id | url | title | text
---|---|---|---
7881297 | https://en.wikipedia.org/wiki/5th%20Signal%20Command%20%28United%20States%29 | 5th Signal Command (United States) | The 5th Signal Command (Theater) ("Dragon Warriors") was a European-based tactical and strategic communications organization of the United States Army specializing in command and control which supported theater-limited, joint-forces, and combined forces activities. The command's mission was to build, operate and defend network capabilities to enable mission command and create tactical, operational and strategic flexibility for Army, Joint and Multinational forces in the EUCOM and AFRICOM areas of responsibility.
History
Formation
This section contains public-domain text taken from The History of the 5th Signal Command
The 5th Signal Command was constituted 1 July 1974 in the Regular Army as Headquarters and Headquarters Company, 5th Signal Command, and activated in Germany.
Headquartered in Wiesbaden, Germany, the command provided forward-based deployable command and control communications supporting theater, joint, and combined forces. It used the Global Information Grid (GIG) to enable extension and reachback capabilities for the Commander, United States European Command (EUCOM). The command's primary focus was to support U.S. Army units and organizations based in Europe. EUCOM headquarters also received support from Department of Defense-level communications organizations for networks and services not provided by 5th Signal Command.
Consisting of the 2nd Signal Brigade, the command was a major subordinate command of the U.S. Army Network Enterprise Technology Command (NETCOM) / 9th Signal Command (Army), headquartered at Fort Huachuca, Arizona. However, the Commanding General of U.S. Army, Europe (USAREUR) and Seventh Army was assigned operational control of the command. 5th Signal Command's commanding general also served as the deputy chief of staff, G-6 (chief information officer) for USAREUR and Seventh Army.
Headquarters, 5th Signal Command was constituted in the Regular Army and activated in Germany on 1 July 1974. The Command traces its original heritage to the U.S. Army Signal Command, Europe, organized under USAREUR General Order dated 20 March 1958, which consolidated military communications in the European Theater. It consisted of the 4th and 516th Signal Groups and 102nd Signal Battalion supporting Army Group, Central Europe; NATO; USAREUR; and other elements in Europe as directed.
The organization expanded from 1961 to 1964, adding 22nd and 106th Signal Groups, with theater responsibilities extending from Belgium, through France and Germany, to Italy. The effort to meet the challenges of rapid growth in technology and communications prompted the birth of United States Army Strategic Communications Command (USSTRATCOM) in Washington, D.C., in March 1964. Its role was to manage the Army's portion of military global communications. The first subordinate command USSTRATCOM formed was STRATCOM-Europe, established 1 July 1964, in Schwetzingen, West Germany.
STRATCOM-Europe absorbed 22nd and 106th Signal Groups and other communications responsibilities from USAREUR. By the end of 1965, all USAREUR communications duties, and even the position of USAREUR Deputy Chief of Staff for Communications–Electronics, had been transferred to STRATCOM-Europe. Changes in signals/military communications continued through the 1970s; 7th Signal Brigade was activated in 1970 from assets of the deactivated Seventh Army communications command. STRATCOM-Europe assumed operational control of the brigade in June 1972 and was redesignated as Army Communications Command-Europe (ACC-E) in October 1973. The 106th and 516th Signal Groups were also inactivated during this time and replaced by the 4th Signal Group.
During the summer of 1974, ACC-E reorganized as Headquarters, 5th Signal Command at Kilbourne Kaserne in Schwetzingen. The reorganization called for the activation of 2nd and 160th Signal Groups from resources of inactivated units from the 22nd and 4th Signal Groups and the assignment of the 6981st Labor Service Group and 72nd Signal Battalion to 5th Signal Command. Additionally, the Command relocated to Taukkunen Barracks, Worms, Germany, in August 1974, and the 12th Signal Group was inactivated by July 1975. 7th Signal Brigade remained under 5th Signal Command's operational control until 1981, when it was officially assigned to the Command.
In the 1980s, 5th Signal Command embarked upon wide-ranging upgrades of its strategic communications equipment in Europe. The Hitler-era Reichspost 40 Strowger switches were replaced by KN-101 electronic switching systems manufactured by Siemens. Likewise, record (message) traffic centers were upgraded with more powerful computer hardware and the Army's microwave backbone network in Europe was modernized with digital radio equipment, and in certain locations, concrete radio towers.
The collapse of communism and the disintegration of the Soviet Union introduced a new international order and prompted an Army-wide drawdown, resulting in changes to military policy during the late 1980s and early 1990s. Warming superpower relations induced a period of adjustment, and 5th Signal Command adjusted accordingly by inactivating the 160th Signal Brigade and consolidating its units into the 2nd Signal Brigade, inactivating the 1st Signal Battalion of 7th Signal Brigade, and relocating the 63rd Signal Battalion to Fort Gordon, Georgia. The resulting organizational structure remained essentially intact thereafter. The 2nd Signal Brigade comprised the 39th, 43rd, 52nd, 69th, 102nd, and 509th Signal Battalions. The 7th Signal Brigade comprised the 44th and 72nd Signal Battalions until its inactivation on 16 May 2014. In addition, the 22nd Signal Brigade and its subordinate units were briefly assigned to 5th Signal Command prior to the brigade's inactivation on 22 May 2007.
Base closures accompanied the troop drawdown. The closure of the Worms military community brought the command to Funari Barracks in Mannheim in September 1996. The closure of the Karlsruhe military community required 7th Signal Brigade and its assigned units to relocate to Sullivan and Taylor Barracks, also in Mannheim. The commanding general of 5th Signal Command then became the senior mission commander for the Mannheim military community.
Since the 1990s, 5th Signal Command's subordinate units have maintained a consistently high operational tempo. During Desert Shield and Desert Storm, the Command deployed elements of 7th Signal Brigade to the Persian Gulf. The 44th and 63rd Signal Battalions deployed and attached to the 11th Signal Brigade, supporting Third Army/Army Central Command and XVIII Airborne Corps. The 1st Signal Battalion and the 268th Signal Company from the 72d Signal Battalion also deployed and were attached to VII Corps' 93rd Signal Brigade. In July 1991, the 7th Signal Brigade supported the humanitarian relief and protection efforts for the Kurds during Operation Provide Comfort.
From 1996 to 1998, 7th Signal Brigade deployed to Hungary and Bosnia, in support of Operation Joint Endeavor providing humanitarian efforts in Bosnia and Herzegovina and Croatia. Later in 1999, elements of the Brigade deployed to Albania in support of Task Force Hawk and to Kosovo in support of Task Force Falcon. 2nd Signal Brigade provided major satellite platforms to sustain the operational base in USAREUR during each of these missions.
Twenty-first century
This section contains public-domain text taken from The History of the 5th Signal Command
Since 11 September 2001, 5th Signal Command's role as USAREUR's communication arm became even more critical in the effort to support the warfighter. The process of building the infostructure in Europe as part of the larger GIG continued to evolve while the nation was at war. In 2001, 5th Signal Command developed Network Operations and Security Centers in conjunction with Network Service Centers to increase command and control of the expanding network, address security challenges, and improve customer service. With the increasing demand for bandwidth and diversity across the USAREUR footprint, 5th Signal Command initiated an intense effort in 2003 to develop the infrastructure with fiber optic connectivity throughout Europe and to begin the elimination of the legacy microwave infrastructure.
After the September 11 terrorist attacks, the United States invasion of Afghanistan, and the 2003 invasion of Iraq, the 5th Signal Command provided deployable communications packages from 2nd Signal Brigade for fort-to-port operations to support deployment and redeployment operations throughout Europe. Efforts to improve command and control communications in USAREUR continued as the Command increased the capability of the operational base across Europe to provide quality communications reachback.
The command deployed significant tactical capabilities. 7th Signal Brigade deployed in February 2003 into Turkey and later southern Iraq in support of 4th Infantry Division and the 173rd Airborne Division's invasion of northern Iraq during Operation Iraqi Freedom. The ability to establish satellite connectivity in support of Operation Iraqi Freedom leveraged 2nd Signal Brigade's regional bandwidth, switching capabilities, and satellite downlinks into strategic, tactical, and commercial satellite entry points. This reachback extended the GIG and enabled the commander on the ground to see friendly and enemy movements; disperse forces and conduct split-based operations; reduce the operational footprint; provide in-transit visibility of supplies, personnel, and equipment; and exploit information dominance, enhancing decision making and command and control for the commander on the ground.
From January through December 2004, Headquarters, 7th Signal Brigade and 72nd Signal Battalion deployed to Kuwait and Iraq in support of Operation Iraqi Freedom 2, providing tactical communications in support of Combined Forces Land Component Commander in Doha, Kuwait. In March 2005, 7th Signal Brigade deployed Task Force Lightning, comprising elements of 44th and 509th Signal Battalions, to Afghanistan for Operation Enduring Freedom in support of the Southern European Task Force.
On 4 November 2016 it was announced that 5th Signal Command would be decommissioned.
On 4 August 2017, 5th Signal Command held a color casing ceremony at Lucius D. Clay Kaserne, Wiesbaden. Most of the 5th's former operations were taken over by the 2nd Theater Signal Brigade.
Mission Statement and Motto
Mission: Build, operate, and defend network capabilities to support the full range of communication, information technology, and cyber requirements for Army, Joint and Multinational forces in the EUCOM and AFRICOM areas of responsibility.
Motto: "Dragon Warriors, Any Mission, Anywhere!"
Honors
Campaign participation credit
None
Decorations
Army Superior Unit Award for 2002–2003.
Heraldic items
Shoulder Sleeve Insignia
Description: On an orange shield with a white border, a stylized green demi-dragon with a red eye emitting two black flashes.
Symbolism:
Orange and white are the colors traditionally associated with Signal units.
The demi-dragon alludes to the unit's area of operations in Worms, Germany.
Background: The insignia was authorized on 24 October 1994.
Distinctive Unit Insignia
Description: A silver color metal and enamel device consisting of five flashes converging at center on a silver disc with three concentric black circles, all encircled by an orange scroll inscribed "PROFESSIONAL" at top and "COMMUNICATIONS" in base in silver letters.
Symbolism:
Orange and white (silver) are the colors traditionally associated with the Signal Corps.
The disc with black lines alludes to the globe, the flashes forming lines of longitude, and symbolizes the far reaching scope of the unit's mission.
They also resemble a target, indicating accuracy and efficiency.
The five flashes refer to the unit's numerical designation.
Background: The insignia was authorized on 13 April 1983.
See also
Mannheim
Seventh United States Army
References
External links
Official homepage
Unit listing at globalsecurity.org
Military.com Installation Guide
Unit listing at usarmygermany.com
Military units and formations disestablished in 2017 |
68244510 | https://en.wikipedia.org/wiki/Steam%20Deck | Steam Deck | The Steam Deck is a handheld gaming computer developed by Valve in cooperation with Advanced Micro Devices (AMD). Released on February 25, 2022, the device is playable either as a handheld or connected to a monitor in the same manner as the Nintendo Switch. It is an x86-64 device with integrated gaming inputs designed to play the full Steam library, including Windows PC games via the Linux-based Proton compatibility layer. Users are able to modify the device's software to run non-Steam applications and games from other sources.
History
Valve's first foray into hardware was with the Steam Machine, a computer specification based on the Linux-derived SteamOS that could be adopted by any computer manufacturer to make systems optimized for running Steam and games from it. Introduced in 2015, the platform did not sell well and Valve quietly pulled back on it by April 2018, but stated they remained committed to providing some type of open-hardware platform. Steam Deck designer Steve Dalton said "there was always kind of this classic chicken and egg problem with the Steam Machine", as it required the adoption of Linux by both gamers and game developers to reach a critical interest in the machines to draw manufacturers in making them. The lack of Linux game availability during the lifetime of Steam Machines led Valve to invest development into Proton, a Linux-based compatibility layer to allow most Microsoft Windows-based applications and games to be run on Linux without modification.
Other factors from the Steam Machine line worked their way into the conception of the Steam Deck. The Steam Controller was developed by Valve as part of the Steam Machine line. Some of the early prototypes of the controller included a small LCD screen within the middle of the controller which could be programmed as a second screen alongside the game that the user was playing. One idea from this prototype was to include the Steam Link, a hardware device capable of streaming game content from a computer running Steam to a different monitor, here routing that output to the small LCD on the controller. This was later considered by Valve a very early concept behind the Steam Deck. Further, their experience with trying to convince other manufacturers to produce Steam Machines led Valve to realize that it was better to develop all their hardware internally. Dalton said, "More and more it just became kind of clear, the more of this we are doing internally, the more we can kind of make a complete package."
As Valve considered options for bringing a handheld device to market, they set a priority that the device had to be able to play nearly the entirety of the Steam game library, and rejected possible hardware that moved away from the standard x86-based processing structure that would have been easier to implement in handheld form but would have limited what games would be available. Only through recent discussions with AMD and their current product lines was Valve able to identify a technical approach that would meet the goal of a handheld device capable of playing all Steam games without overtaxing the processor unit. The developers considered the Steam Deck to be future-proof. While the specifications are modest compared to high-end gaming computers, they felt that the performance was at a good place that would be acceptable for many years, while still looking at newer software improvements, such as the addition of AMD's FidelityFX Super Resolution (FSR). Though they do not have any current designs for a successor, Valve stated that there would likely be future iterations of the hardware in years to come, but the company expects the timing of releases to depend on the current state of processor technology and handheld device limitations rather than a regular upgrade cycle.
Valve's CEO, Gabe Newell, said of the Steam Deck's approach, "As a gamer, this is a product I've always wanted. And as a game developer, it's the mobile device I've always wanted for our partners." According to Newell, they wanted to be "very aggressive" on the release and pricing strategy as they considered the mobile market as their primary competitor for the Deck. However, their focus was on the unit's performance; Newell stated, "But the first thing was the performance and the experience, [that] was the biggest and most fundamental constraint that was driving this." Newell recognized that the base pricing was somewhat higher than expected and "painful", but necessary to meet the expectation of gamers that would want the Deck. Newell continued that he believed this was a new product category of personal computer hardware that Valve and other computer manufacturers would continue to participate in if the Steam Deck proved successful, and thus it was necessary to keep the unit's price point reasonable to demonstrate viability. The openness of the system was also a key feature according to Newell, as that is a defining "superpower" of the personal computer space over typical console systems. Newell did not want to have any limitations on what the end user could do with the hardware, such as installing alternate non-Steam software on it.
Announcement and release
Rumors that Valve was working on a portable gaming unit had emerged in May 2021, based on updates made within the Steam code pointing towards a new "SteamPal" device, and comments made by Gabe Newell related to Valve developing games for consoles. Ars Technica had been able to confirm that new hardware was in development at Valve.
Valve revealed the Steam Deck on July 15, 2021. The Deck, offered in three different models based on internal storage options, was originally expected to ship in December 2021 to the US, Canada, the EU and the UK, with other regions to follow later. Due to its popularity, some pre-order purchasers were informed that later shipments of the 64 GB and 256 GB NVMe models would arrive in Q2 2022 and the 512 GB NVMe model by Q3 2022. Valve informed pre-purchasers in November 2021 that, due to the 2020–2021 global chip shortage, the device would fail to ship by December and instead would ship in February 2022, retaining the same order for delivery based on pre-order placement.
Pre-orders for the Steam Deck were opened on the day after its announcement, on July 16, 2021. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. First-day pre-order reservations through the Steam storefront briefly crashed the servers due to the demand. By September 2021, development kits for the Steam Deck were shipping to developers. The device was released on February 25, 2022. A promotional video was released on February 28, showing Gabe Newell hand-delivering signed Steam Deck units to customers in the Seattle area, where Valve is located.
Hardware
The Steam Deck includes a custom accelerated processing unit (APU) built by AMD based on their Zen 2 and RDNA 2 architectures, named Aerith based on the Final Fantasy VII character Aerith Gainsborough. The CPU runs a four-core/eight-thread unit and the GPU runs on eight compute units with a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable timing frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processor needs. Valve stated that the CPU has comparable performance to Ryzen 3000 desktop computer processors and the GPU performance to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad-channel configuration, with a total bandwidth of 88 GB/s.
The unit shipped in three models based on internal storage options. The base model includes a 64 GB eMMC internal storage unit, running over PCI Express 2.0 x1. A mid-tier model includes 256 GB of storage through an NVMe SSD device, while the high-end unit includes a 512 GB NVMe SSD storage unit, with the latter two both shipping with drives that run PCI Express 3.0 x4. All 3 SKUs utilize the same M.2 2230 interface for internal storage. Valve stated that the built-in storage is not meant to be replaceable by end-users, though can be replaced as necessary for repair. Additional storage space is available through a microSD card slot, which also supports microSDXC and microSDHC formats.
The Deck's main unit is designed for handheld use. It includes a touchscreen LCD display with a 1280x800 pixel resolution and a fixed 60 Hz refresh rate; games are configured to use Vsync where possible. The unit's input set features two thumbsticks, a directional pad, ABXY buttons, two shoulder buttons on each side of the unit, four additional buttons on the rear of the unit, as well as two trackpads, one below each thumbstick. The thumbsticks and trackpads use capacitive sensing, and the unit further includes a gyroscope to allow for more specialized controls in handheld mode. The unit also includes haptic feedback.
The Deck supports Bluetooth connectivity for input devices, including common game controllers, and includes integrated WiFi network support to meet IEEE 802.11a/b/g/n/ac standards. The Deck supports stereo sound out via a digital signal processor and includes both an integrated microphone and a headphone jack. The Deck includes a 40 watt-hour battery, which Valve estimates can last between seven and eight hours for "lighter use cases like game streaming, smaller 2D games, or web browsing". Valve estimated that by keeping frame rates to around 30 frames per second (FPS), more intensive games such as Portal 2 could be played for five to six hours. The system's software includes an optional FPS limiter that balances a game's performance to optimize battery life.
At release, Steam Decks were only manufactured in a black casing to reduce the complexity of production, though Valve stated that they have considered introducing other case colors or themes in the future. Valve partnered with iFixit to provide replacement parts for users.
A dock unit was released separately alongside the device. The dock unit can be connected to an external power source to power the Deck, and to an external monitor via either HDMI or DisplayPort protocols to route output from the Deck to that monitor. Though limited by the processor speed, the display output from the Deck via the dock can reach as high as 8k resolution at 60 Hz or 4k resolution at 120 Hz; this resolution boost can also be achieved by attaching the Deck directly through a USB to HDMI adapter without the use of the docking station. There is no other change in performance of the Steam Deck whether docked or when used in portable mode. The dock also supports Ethernet network connectivity and support for USB connections for controllers or other input devices. The Deck can also work with any third-party docking station that supports similar types of interfacing for portable devices. External GPUs are not supported.
Software
Steam Deck runs a modified Arch Linux operating system called SteamOS v3.0. While SteamOS had been previously developed for Steam Machines using Debian Linux, Valve stated that they wanted to use a rolling upgrade approach for the Deck's system software, a function Debian was not designed for but was a feature of Arch Linux. An application programming interface (API) specific for the Steam Deck is available to game developers, allowing a game to specify certain settings if it is being run on a Steam Deck compared to a normal computer. Within the Steam storefront, developers can populate a special file depot for their game with lower-resolution textures and other reduced elements to allow their game to perform better on the Steam Deck; Steam automatically detects and downloads the appropriate files for the system (whether on a computer or Steam Deck) when the user installs the game.
The SteamOS software includes support for Proton, a compatibility layer that allows most games developed for Microsoft Windows to be played on the Linux-based SteamOS. According to ProtonDB, a user-run database that compiles information on the compatibility of Steam games with Linux using Proton, several of Steam's more popular game releases were not yet compatible with Proton, primarily due to anti-circumvention and anti-cheat controls or digital rights management (DRM). Valve stated they were working with vendors of these middleware solutions to improve Proton support while also encouraging Linux-specific versions to be developed. Epic Games' Easy Anti-Cheat, one of the more popular anti-cheat solutions for developers, was made available for macOS and Linux systems in September 2021, and Epic stated that developers could easily enable it for the Proton layer. Valve worked with Epic over the end of 2021 to make the transition of Easy Anti-Cheat to Proton simple for developers. Another popular anti-cheat solution, BattlEye, also affirmed their software was ready to work with the Proton layer and only required developers to opt in to enable it. Valve stated that in testing games otherwise currently available on Linux or compatible with the Proton layer, they had yet to find a game that failed to meet a minimum 30 frames-per-second performance on the handheld, a performance metric comparable to the consoles of the eighth generation. The Proton layer includes support for AMD's upscaling technology FidelityFX Super Resolution (FSR); while Proton also supports Nvidia's DLSS upscaling solution, this was not included on the Deck, whose GPU is AMD hardware.
Due to potential confusion over game compatibility, Valve introduced a process in October 2021 by which they brought in additional staff to review games on Steam in order to make sure a game is fully playable on the Steam Deck. Titles that are confirmed to be compatible with the Steam Deck, including those using Proton and any middleware anti-cheat or DRM solutions, and that by default meet minimum performance specifications, are marked as "Verified". Games that may require some user tinkering with settings, such as having to use a system control to bring up the on-screen keyboard, are tagged as "Playable". Another category, "Unsupported", covers games that Valve has tested and found not fully compatible with the Steam Deck, such as VR games or games using Windows-specific codecs that have not yet been made compatible with Proton. These ratings can change over time as the Steam Deck software improves and as developers update their games to improve compatibility with it.
The Steam client on the Deck runs a revised version of the Steam client for desktops. Unlike Steam's Big Picture mode, which was designed for use on television screens and treated as a separate software branch within Valve, the Deck version of the Steam client stays consistent with the desktop version, adding functions and interface elements to make navigating through Steam easier with controller input, along with indicators typical for portable systems such as battery life and wireless connectivity. Valve anticipates phasing out the Big Picture mode in Steam in favor of the Steam Deck user interface in the future. The version of Steam on the Deck otherwise supports all other functions of Steam, including user profiles and friends lists, access to game communities, cloud saving, Steam Workshop support, and the Remote Play feature. Remote Play also allows the Steam Deck to be used as a controller for a game running on a computer, providing additional control options beyond traditional keyboard and mouse or common controller systems. The Steam software on the Deck also supports suspending a game in progress, a feature considered by Valve to be core to the Deck. Otherwise, games that do not take advantage of the Steam Deck API have the handheld's controller input automatically converted for them. For example, the touch-sensitive controllers on the Deck translate input appropriately for games that typically rely on keyboard and mouse controls. Valve added to Steam's existing approach to cloud saving with the introduction of Dynamic Cloud Sync in January 2022. Prior cloud functionality only synchronized game saves after the user had exited a game; developers can enable Dynamic Cloud Sync to use cloud saving while the game is running, making this feature more amenable to portable use on the Steam Deck.
Users download games onto the Steam Deck to store on either the internal storage or SD card, each storage device treated as a separate Steam Library for games. This allows SD cards with different Steam Libraries to be swapped in and out. Valve is exploring the ability to pre-load games on an SD card outside of the Deck, such as through a personal computer.
While the Deck was designed for playing Steam-based games, it can be loaded with third-party software, such as alternative storefronts like Epic Games Store, Ubisoft Connect, or Origin. The user can also choose to replace SteamOS with a different operating system entirely, as it supports multi-booting. The device's built-in browser supports Xbox Cloud Gaming, allowing those with Xbox Game Pass subscriptions access to that library of games. Newell stated that Valve would support Microsoft in bringing Xbox Game Pass to Steam and Steam Deck if they want that route.
As part of the Steam Deck's launch, Valve released Aperture Desk Job, a spinoff game in the Portal series, for free on March 1, 2022, available to all Windows and Linux/SteamOS users and preloaded on the Steam Deck. The game is designed to demonstrate the various features of the Steam Deck, though is still playable with a controller for other systems.
Reception
The initial reaction to the announcement of the Steam Deck was positive. Epic Games' Tim Sweeney and Xbox Game Studios' Phil Spencer complimented Valve on the Steam Deck, with Sweeney calling it an "amazing move by Valve!" Spencer congratulated Valve "on getting so many of us excited to be able to take our games with us wherever we decide to play".
Many outlets compared the unit to the Nintendo Switch, generally recognized as the first true hybrid video game console. Valve stated that they did not really consider the Switch in designing the Deck, as they "tried to make all the decisions really in Steam Deck that targeted that audience and that served the customers that were already having a good time interacting with the games that are on that platform, on our platform", and that, by happenstance, they came out with a device that was similar in function to the Switch. The Verge stated that generally, the Steam Deck was a more powerful machine compared to the Switch, but that power came with a tradeoff in battery life, which was greater on the Switch. Further, The Verge noted that the specifications of the Deck were more comparable to the power of eighth-generation consoles like the Xbox One and PlayStation 4, though using more recent processor and graphics architectures than those that powered the older systems. Kotaku stated that while the Deck and Switch may be similar in concept, the two were not competing devices due to their target demographics, with the Switch aimed more at a broad audience, while the Deck was geared towards more "hardcore" gamers. Digital Foundry identified that while the Deck's hardware may be more powerful, developers are not necessarily able to get the low-level access to the CPU/GPU that developers working on the Switch can. While Switch games can be heavily optimized for that system, optimizing games for the Deck may be hampered, according to Digital Foundry, further separating the two systems in terms of competition.
One of the main criticisms of the Steam Deck highlighted by multiple reviewers has been its battery life. Matt Hanson writing for TechRadar stated, "Less welcome is the fact that the battery life of the Steam Deck is pretty poor, with it just about managing one and a half hours while playing God of War" and that "Unfortunately during our time with the Steam Deck, battery life is an issue." He expanded by saying "That’s going to upset a lot of people who may have been planning on using the Steam Deck for long flights, for example" and that "it certainly makes this portable gaming system feel less… well, portable." Matt Miller writing for Game Informer said that "Battery life is a significant problem" and that the device's battery life was "Punishingly low". Steve Hogarty writing for The Independent said "The battery life is by far the Steam deck’s biggest weakness. The handheld PC chugs through juice like it’s going out of fashion, with some graphically demanding games draining a full charge in as little as two hours of playtime." Seth G. Macy writing for IGN wrote in very similar terms, saying, "Beyond that limitation, the biggest, most deflating issue I’ve had has been battery life. It’s all over the place and probably the biggest reality check when it comes to realizing the dream of truly untethered PC gaming." In terms of resolving this problem Richard Leadbetter writing for Eurogamer said they "can't help feel that elements like fan noise and battery life can only be resolved with a revised processor on a more efficient process node."
References
External links
2022 in video gaming
Computer-related introductions in 2022
Handheld game consoles
Handheld personal computers
Products introduced in 2022
Steam (service) |
39296737 | https://en.wikipedia.org/wiki/St.%20Paul%20College%20of%20Ilocos%20Sur | St. Paul College of Ilocos Sur | Saint Paul College of Ilocos Sur, also referred to by its acronym SPCIS or SPC Ilocos Sur, is a private Catholic basic and higher education institution run by the Sisters of Saint Paul of Chartres in Bayubay, San Vicente, Ilocos Sur. It is the oldest private school in Ilocos Sur, Philippines and is a member school of the Saint Paul University System. It was founded in 1905.
History
The school was originally established in Vigan, Ilocos Sur, in 1905 by the Sisters of St. Paul of Chartres. In 1911, it was incorporated as the "Girls College of Our Lady of the Rosary." Secondary education was introduced in 1912; a Junior Normal College was started in 1946, providing courses in the liberal arts, an elementary course in piano (1950), and a four-year Bachelor of Science in Elementary Education.
In 1961 the name became "Rosary College of Vigan, Incorporated" and in 1969, "St. Paul College of Ilocos Sur." Male students were first accepted in the College Department in 1965. The school expanded to a new site in Bayubay, San Vicente, starting in 1997.
On June 29, 2010, the St. Paul University System incorporated St. Paul College of Ilocos Sur as an affiliate member of the SPUS, whose other members are St. Paul University Philippines at Tuguegarao City, St. Paul University Dumaguete, St. Paul University Iloilo, St. Paul University Manila, St. Paul University Quezon City, and St. Paul University Surigao.
Academic Programs
Graduate School
There are postgraduate programs in Business, Education, and Information Technology. These consist of onsite and online classes.
College
Department of Arts, Sciences and Teacher Education
Bachelor of Arts (AB) with majors in English, Mathematics, Filipino, and Religious Education
Bachelor of Elementary Education (BEEd)
Bachelor of Secondary Education (BSEd) with majors in Biological Sciences, English, Mathematics, Filipino, Home Economics, and Religious Education
Department of Nursing
Bachelor of Science in Nursing (BSN)
Department of Business Education
Bachelor of Science in Accountancy (BSA)
Bachelor of Science in Business Administration (BSBA) major in Financial Management and Human Resources Management
Bachelor of Science in Entrepreneurship (BSEntrepreneur)
Bachelor of Science in Information Technology (BSIT)
Department of Hospitality and Tourism Management
Bachelor of Science in Hospitality Management (BSHM)
Bachelor of Science in Tourism Management (BSTM)
Associate in Hotel and Restaurant Services (HRS)
Associate in Tourism
Senior High School
Academic Track
Accountancy, Business, and Management (ABM)
General Academics Strand (GAS)
Humanities and Social Sciences (HumSS)
Science, Technology, Engineering, and Mathematics (STEM)
Technical-Vocational Track
Food and Beverages Services
Information Technology
Local Guiding Services
Junior High School
Special Science High School Curriculum
Grade 7-10
Grade School
Kindergarten
Grades 1 to 6
ECA
MTG, ICAS, AMC, IMAS, M-Tap
School Accreditation
(PAASCU Accredited Level III)
Grade School
High School Department
(PAASCU Re-Accredited Level II)
College Department - Department of Arts, Sciences and Teacher Education
References
External links
Universities and colleges in Ilocos Sur
Catholic universities and colleges in the Philippines
Catholic elementary schools in the Philippines
Catholic secondary schools in the Philippines
Graduate schools in the Philippines
1905 establishments in the Philippines
Educational institutions established in 1905 |
48753129 | https://en.wikipedia.org/wiki/Howard%20Shane | Howard Shane | Howard C. Shane is director of the Autism Language Program and Communication Enhancement Program at Children's Hospital in Boston, Massachusetts, former director of the Institute on Applied Technology, and associate professor at Harvard Medical School. He is internationally known for his research and development of augmented and alternative communication systems to support the communication needs of people with neuromuscular disorders, autism and other disabilities.
Education
Shane graduated from the University of Massachusetts at Amherst in 1969 with a B.A. in sociology. He went on to earn an M.A. in speech pathology and audiology in 1972 (also from the University of Massachusetts) and a PhD in speech pathology in 1975 from Syracuse University. He completed a doctoral fellowship in 1975 at the Mayo Clinic.
Career
Shane began his career as an assistant professor of communication sciences at the University of Vermont in Burlington, Vermont (1975–1977). He served as associate professor at Emerson College, Department of Communication Studies, (1977–1995), and visiting associate professor for the University of Massachusetts (1985–1990). Shane was an assistant professor at Harvard Medical School's Department of Otology and Laryngology (1986–1995) before becoming an associate professor (1996–present). Shane is also a professor of communication science and disorders at MGH Institute of Health Professions (1997–present). In 1977, Shane was appointed associate scientist of otolaryngology at Boston Children's Hospital.
Also in 1977, Shane was appointed the director of speech pathology and audiology at the Developmental Evaluation Clinic at Children's Hospital Boston and held that position until 1991. In 1985, he was appointed director of the Communication Enhancement Center (CEC), the augmentative communication program, at Boston Children's Hospital. In 2005, he assumed leadership of the Center for Communication Enhancement (which encompassed the old CEC and five other programs), a role he continues to hold.
For his lifetime achievements at Boston Children's Hospital, Shane was awarded the Center for Communication Enhancement's inaugural Directorship Chair in 2015. The endowed Chair, which will be named for him in the future, is funded by the Boston Children's Hospital Otolaryngologic Foundation.
Communication and technology
Shane has spent much of his career researching and developing assistive technologies that support children and adults, including Stephen Hawking, whose ability to communicate in spoken or written language forms is "limited by autism, cerebral palsy, language disorders, spinal cord injuries, or neuromuscular diseases." The systems have become so refined that a person does not need dexterity to activate a computer on their own and select letters, words or pre-programmed phrases from a screen. Small muscle movements suffice. Finger twitches, head nods and eye blinks, as well as the spoken voice for those with that ability, are all that is required for individuals to communicate independently.
Touch 'N Speak
In 1983, Shane directed a program through his Institute on Technology to create technology solutions for students at Boston College. Other team members included Allen Field from Boston Children's Hospital, Katharyn Dawson, a speech and language pathologist, and Don Ricciato, principal of the school. The team was dedicated to designing and implementing teaching tools to assist people who were unable to speak in their efforts to communicate. The school worked with students whose ages ranged from 10 to 25 and who exhibited a wide range of neurological, physical and intellectual challenges.
This collaboration led to the creation of Touch 'N Speak, a software program that allowed students to use movement (i.e., of an elbow or head), to activate touch-sensitivity keyboards to access pre-programmed messages and activate a vocal mechanism. This also marked the first time that a computer (an Apple IIe) was successfully mounted on a wheelchair. Ground-breaking at the time, this was one of the first innovations in the field of "augmentative communication," recognized as a valid form of communication by the American Speech–Language–Hearing Association in 1981.
Microsystems Software
In 1989, Shane consulted with the programmers of Microsystems Software Inc., a company owned by Richard and Deborah Gorgens, to develop software packages to assist people with disabilities in their efforts to participate in the workplace. The result was HandiWare, a collection of computer programs that ran on IBM-compatible PCs and sought to assist people with "physical impairments, visual impairments and those individuals requiring computer-aided speech." HandiChat, targeted for people with speech impairments, allowed individuals to type on a keyboard and have their words spoken through a DECtalk speech synthesizer. HandiWord, a "word prediction program", was responsive to individuals' most frequently used words and finished spelling out words based on the first few letters. At the time these programs were being introduced, the Americans with Disabilities Act was newly enacted. Communication technologies like HandiWare enabled individuals who had not worked in 10–15 years because of their communication difficulties to return to the workplace and become productive members of it.
Starbright World
In 1995, a computer-generated play world called Starbright World was made available to the children's hospital, as well as other hospitals in New York, Pittsburgh and California, that allowed patients to connect through cyberspace. Starbright World, an interactive network financed by Steven Spielberg, was intended to help patients with serious and chronic illnesses escape in the world of play and, if desired, connect with others facing the same kind of diagnosis and treatment. Initially, access to Starbright World relied on a child's ability to use a mouse or type on a keyboard. Shane and his team worked to find alternate navigation techniques for children whose motor skills were impaired or for those who did not yet have the required computer skills.
Monarch School
In 2002, the Monarch School for Autism, in Shaker Heights, Ohio, began a collaboration with Boston Children's Hospital and the MGH Institute of Health Professions to support children with autism in their use of communication and development of life skills. The Monarch School for Autism, an intensive one-on-one, language-enrichment program, was the first of its kind in Ohio, serving only children with autism, whose needs are often under-served in the public school system. At the time, Shane had more than 30 years of experience assisting people with autism and, as a result, had developed computer software specifically designed to "boost verbal communication skills" using visual information. The focus of the collaboration, headed by Shane, was to develop a curriculum using the software and other complementary technologies reliant on visual cuing that could serve as a model for educational programs throughout the U.S.
Visual Immersion System
Shane led a team to develop the Visual Immersion System (VIS), a visual curriculum to support the communication needs of people with disabilities. The curriculum makes use of communication technology, including the iPad, which allows people with autism to engage in visual activities that aid in the development of language skills. The effectiveness of the program is currently under study, with clinical evidence "still emerging," but, as Shane states, "the excitement and interest in these technologies exist because they are working."
Facilitated Communication
Facilitated Communication (FC), popularized in the early 1990s in the United States by Douglas Biklen, is an alleged communication technique in which a facilitator (usually a parent, educator, or caregiver) holds the hand, shoulder, or arm of a person with disabilities while that person types on a paper letter board or mechanical keyboard.
Shane first learned of FC in Sweden, when he attended an International Society for Augmentative and Alternative Communication (ISAAC) conference where Rosemary Crossley gave a speech. He questioned claims from promoters that individuals with severe disabilities, some as young as 5 or 6 years old and without formal training in reading or written language, could produce messages that included "perfectly spelled sentences", and whether the communications were originating from the children or from the adult facilitators. On the 1993 Frontline show, Prisoners of Silence, produced by Jon Palfreman, Shane questioned the sophistication of the sentences being typed using FC. Students, with the help of facilitators, were typing out grammatically correct and accurately spelled sentences that, according to Shane, held "insights that go far beyond their years." Proponents maintained that these children learned language and written language skills by being "immersed in language-rich environments."
Already versed in communications technology that allowed people with autism and a wide range of physical disabilities to communicate independently and without someone else's touch, Shane criticized FC as "bogus nonsense" and "a complete waste of time."
Shane became further involved with FC when he was called as an expert witness in a court case in which parents had been accused of sexual abuse through messages produced by facilitator Janyce Boynton, who was using FC with their autistic child. Shane established simple double-blind protocols to test the validity of the messages and determine their authorship. The results indicated that not only was the child incapable of typing out the messages, but the content produced was based on the facilitator's knowledge of the materials presented. In his book, Facilitated Communication: The Clinical and Social Phenomenon, Shane outlines a "wide range of tasks and procedures" that practitioners can use to establish the source of facilitated messages. Since that first court case, Shane has continued to serve as an expert witness, with the results, to date, always the same. Rather than support people with autism in their efforts to communicate, FC, according to Shane, is "hurtful and harmful" and deprives "children of their right to independently communicate." Boynton, realizing she had been the one doing the communicating, pressured her school administration to err on the side of caution and end the practice of FC. Over the years Shane kept in touch with Boynton and continued to encourage her to speak out about FC.
Shane believes that FC messages originate from facilitators who "subconsciously guide the hands and fingers of people they are assigned to help" via the ideomotor phenomenon. With regard to the ideomotor phenomenon and FC, the facilitators become so absorbed in the typing process that they are unaware of their own movements while holding onto their disabled communication partner. Shane stated on Prisoners of Silence: "You can't be a one-finger typist and not look at the keyboard. You just can't get oriented. You don't have a home position. And when you watch children who are F/C – facilitated communication – users, they may not be looking at the language board, but the facilitators are not taking their eyes off it. They're fixed on it."
Critics of Shane's stance on FC claim that testing is unfair to the person with disabilities who, they claim, might exhibit test anxiety or "freeze in their ability to respond."
Donald P. Oswald, who reviewed Facilitated Communication: The Clinical and Social Phenomenon, praised the book for its "valuable perspectives on the FC story," citing chapters written by Jon Palfreman, Gina Green, Wolf Wolfensberger, Barry Prizant, and Shane that provide a "preliminary retrospective of the FC fad in the United States," but criticized the book for sometimes departing from "dispassionate discourse." He wrote: "The authors in this work occasionally reveal the personal distress they have experienced and as a result, at times the tone of their writing is defensive or aggressive. Nonetheless, this book offers valuable perspectives on the FC story and, depending on the reader's personal position, will stimulate, enlighten and, at times, enrage."
Supporters of FC state that often people start out using a facilitator and eventually learn to type without physical support. Shane responded by saying, "If someone's going to be a typist, they don't need somebody to facilitate them."
Memberships and appointments
SpecialNeedsWare Advisory Board (Chair, 2012)
Director, Model Autism Program (MAP), Boston Public Schools (2006)
Director, Clinical and Research Liaison, Monarch School for Autism (2002)
Conferences and summits
Panelist with Matthew Goodwin, director of clinical research at MIT Media Lab and co-director of the Autism Technology Initiative at MIT, at the Third Annual Summit on Autism, hosted by Kids Institute for Development and Advancement (KiDA), University of California. Topic: technological advancements and their impact on autism. (September 17, 2011)
Behavior Analysis Association of Michigan (BAAM) Conference, Eastern Michigan University, keynote speaker: "Using Technology to Educate Persons with Autism Spectrum Disorders: Do Professionals Get a Passing Grade?" (2009)
The Herbert J. and E. Jane Oyer Annual Lecture on Communication Disorders and Human Development, Michigan State University. Topic: applying the visual strengths of person on the autism spectrum to communication intervention. (2008)
Awards and honors
Frank R. Kleffner Lifetime Clinical Career Award (2019), conferred on November 22, 2019 at the annual American Speech-Language-Hearing Foundation Founders Breakfast in Orlando, Florida
Award for Significant Contributions to the fields of public health and science (2017), presented by the School of Public Health and Health Sciences at UMass Amherst
Honors of ASHA Award (2007), presented by the American Speech–Language–Hearing Association
Teacher of the year, MGH Institute of Health Professions (2002)
Goldenson Award for Innovations in Technology, presented by the United Cerebral Palsy Association (2000)
American Speech–Language–Hearing Association (fellow since 1989)
Kleffner Clinical Achievement Award for Technology, Massachusetts Speech-Language-Hearing Association (1995)
Pioneer Award for technology in clinical practice, Massachusetts Federation of the Council for Exceptional Children (1993)
Finalist for technology innovation, Smithsonian Institution Computerworld (1989)
Select books
Unsilenced: A Teacher's Year of Battles, Breakthroughs, and Life-Changing Lessons at Belchertown State School (2021)
Enhancing Communication For Individuals With Autism, with Jennifer S. Abramson, Kara Corley, Holly Fadie, Suzanne Flynn, Emily Laubscher, Ralf Schlosser, and James Sorce. Foreword by Connie Kasari. (2015)
Visual Language in Autism, with Sharon Weiss-Kapp (2007)
The Children's Hospital Guide to Your Child's Health and Development, with Margaret A. Kenna and Alan D. Woolf (2001)
Facilitated Communication: The Clinical and Social Phenomenon (Editor) (1994)
Select articles
The Persistence of Fad Interventions in the Face of Negative Scientific Evidence: Facilitated Communication for Autism as a Case Example, with Scott O. Lillienfeld, Julia Marshall, and James T. Todd (2015)
Applying Technology to Visually Support Language and Communication in Individuals with Autism Spectrum Disorders, with Emily Laubscher, Ralf Schlosser, Suzanne Flynn, James Sorce, and Jennifer Abramson (2012)
Using AAC Technology to Access the World, with Sarah Blackstone, Gregg Venderheiden, Michael Williams, and Frank DeRuyter (2012)
Animation of Graphic Symbols Representing Verbs and Prepositions: Effects on Transparency, Name Agreement, and Identification, with Ralf Schlosser, James Sorce, Rajinder Koul, Emma Frances Bloomfield, and Lisa Debrowski (2011)
There Isn't Always an App for That!, with Jessica Gosnell and John Costello (2011)
Use of a Visual Graphic Language System to Support Communications for Persons on the Autism Spectrum, with M. O'Brien and James Sorce (2009)
Electronic Screen Media for Persons with Autism Spectrum Disorders: Results of a Survey, with Patti Ducoff Albert (2008)
Facilitated Communication as an Ideomotor Effect, with Cheryl A. Burgess, Irving Kirsch, Kristen L. Niederauer, Steven M. Graham, and Alyson Bacon (1998)
An Examination of the Role of the Facilitator in "Facilitated Communication", with Kevin Kearns (1994)
Selection of Augmentative Communication Systems: American Speech-Language-Hearing Association (1985)
Decision Making in Early Augmentative Communication System Use (1981)
Election Criteria for the Adoption of an Augmentative Communication System: Preliminary Considerations, with Anthony S. Bashir (1980)
References
American medical writers
Autism researchers
Harvard Medical School faculty
Living people
People from Leominster, Massachusetts
Speech and language pathologists
Syracuse University alumni
University of Massachusetts Amherst College of Social and Behavioral Sciences alumni
Year of birth missing (living people) |
88823 | https://en.wikipedia.org/wiki/AmigaDOS | AmigaDOS | AmigaDOS is the disk operating system of the AmigaOS, which includes file systems, file and directory manipulation, the command-line interface, and file redirection.
In AmigaOS 1.x, AmigaDOS is based on a TRIPOS port by MetaComCo, written in BCPL. BCPL does not use native pointers, so the more advanced functionality of the operating system was difficult to use and error-prone. The third-party AmigaDOS Resource Project (ARP, formerly the AmigaDOS Replacement Project), a project begun by Amiga developer Charlie Heath, replaced many of the BCPL utilities with smaller, more sophisticated equivalents written in C and assembler, and provided a wrapper library, arp.library. This eliminated the interfacing problems in applications by automatically performing conversions from native pointers (such as those used by C or assembler) to BCPL equivalents and vice versa for all AmigaDOS functions.
From AmigaOS 2.x onwards, AmigaDOS was rewritten in C, retaining 1.x compatibility where possible. Starting with AmigaOS 4, AmigaDOS abandoned its legacy with BCPL. Starting from AmigaOS 4.1, AmigaDOS has been extended with 64-bit file-access support.
Console
The Amiga console is a standard Amiga virtual device, normally assigned to CON: and driven by console.handler. It was developed from a primitive interface in AmigaOS 1.1, and became stable with versions 1.2 and 1.3, when it started to be known as AmigaShell and its original handler was replaced by newconsole.handler (NEWCON:).
The console has various features that were considered up to date when it was created in 1985, like command template help, redirection to null ("NIL:"), and ANSI color terminal. The new console handler – which was implemented in release 1.2 – allows many more features, such as command history, pipelines, and automatic creation of files when output is redirected. When TCP/IP stacks like AmiTCP were released in the early 1990s, the console could also receive redirection from Internet-enabled Amiga device handlers (e.g., TCP:).
Unlike other systems originally launched in the mid-1980s, AmigaDOS does not implement a proprietary character set; the developers chose to use the ANSI–ISO standard ISO-8859-1 (Latin 1), which includes the ASCII character set. As in Unix systems, the Amiga console accepts only linefeed ("LF") as an end-of-line ("EOL") character. The Amiga console has support for accented characters as well as for characters created by combinations of 'dead keys' on the keyboard.
Syntax of AmigaDOS commands
This is an example of typical AmigaDOS command syntax:
1> Dir DF0:
Without entering the directory tree, this shows the content of a directory of a floppy disk and lists subdirectories as well.
1> Dir SYS: ALL
The argument "ALL" causes the command to show the entire content of a volume or device, entering and expanding all directory trees. "SYS:" is a default name that is assigned to the boot device, regardless of its physical name.
Command redirection
AmigaDOS can redirect the output of a command to files, pipes, a printer, the null device, and other Amiga devices.
1> Dir > SPEAK: ALL
Redirects the output of the "dir" command to the speech synthesis handler. The colon character ":" indicates that SPEAK: points to an AmigaDOS device. While a typical use for a device is file systems, special-purpose device names such as this are commonly used in the system.
Command template
AmigaDOS commands are expected to provide a standard "template" that describes the arguments they can accept. This can be used as a basic "help" feature for commands, although third-party replacement console handlers and shells, such as Bash or Zshell (ported from Unix), or KingCON often provide more verbose help for built-in commands.
On requesting the template for the command "Copy", the following output is obtained:
1> Copy ?
FROM, TO/A, ALL/S, QUIET/S
This string means that the user must use this command in conjunction with FROM and TO arguments, where the latter is compulsory (/A). The argument keywords ALL and QUIET are switches (/S) and change the results of the command Copy (ALL causes all files in a directory to be copied, while QUIET will cause the command to generate no output).
By reading this template, a user can know that the following syntax is acceptable for the command:
Copy DF0:Filename TO DH0:Directory/Filename
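The template format lends itself to simple machine parsing. The following sketch (in Python, purely illustrative and not the actual AmigaDOS parser) splits a template such as the one above into argument names and their qualifiers:
# Illustrative sketch only, not the AmigaDOS parser: split a command
# template such as "FROM,TO/A,ALL/S,QUIET/S" into argument descriptors.
def parse_template(template):
    """Return (name, qualifiers) pairs from an AmigaDOS-style template."""
    args = []
    for field in template.replace(" ", "").split(","):
        name, *quals = field.split("/")
        args.append((name, quals))       # e.g. ("TO", ["A"]) is a required argument
    return args

for name, quals in parse_template("FROM,TO/A,ALL/S,QUIET/S"):
    kind = "switch" if "S" in quals else ("required" if "A" in quals else "optional")
    print(name, kind)
Running the sketch reports FROM as optional, TO as required, and ALL and QUIET as switches, mirroring the explanation above.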
Breaking commands and pausing console output
A user can terminate a program by invoking the key combination Ctrl+C or Ctrl+D. Pressing the space bar or any printing character on the keyboard suspends the console output. Output may be resumed by pressing the Backspace key (to delete all of the input) or by pressing the Return key (which will cause the input to be processed as a command as soon as the current command stops running).
Wildcard characters
Like other operating systems, AmigaDOS also provides wildcard characters that are substitutes for any single character or any sequence of arbitrary characters in a string. Here is an example of wildcard characters in AmigaDOS commands:
1> Dir #?.info
Searches the current directory for any file whose name ends with the ".info" suffix and displays only those files in the output.
The parsing of this is as follows. The "?" wildcard indicates "any character". Prefixing this with a "#" indicates "any number of repetitions". This can be viewed as analogous to the regular expression ".*".
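This analogy can be made concrete with a small sketch (Python; a toy converter that only handles the "?" and "#?" patterns discussed here, not the full AmigaDOS pattern language):
import re

def amiga_pattern_to_regex(pattern):
    """Translate '?' and '#?' into their regular-expression equivalents."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("#?", i):
            out.append(".*")             # "#?" matches any run of characters
            i += 2
        elif pattern[i] == "?":
            out.append(".")              # "?" matches exactly one character
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return "^" + "".join(out) + "$"

print(amiga_pattern_to_regex("#?.info"))                                # ^.*\.info$
print(bool(re.match(amiga_pattern_to_regex("#?.info"), "Disk.info")))   # True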
Scripting
AmigaDOS also supports batch programming, which it calls "script" programming, and provides a number of commands such as Echo, If, Then, EndIf, Val, and Skip for structured script programming. Scripts are text-based files and can be created with AmigaDOS's internal text editor program, called Ed (unrelated to Unix's ed), or with any other third-party text editor. To invoke a script program, AmigaDOS uses the command Execute.
1> Execute myscript
Executes the script called "myscript".
This method of executing scripts keeps the console window busy until the script has finished its scheduled job. Users cannot interact with the console window until the script ends or until they interrupt it.
While:
1> Run Execute myscript
The AmigaDOS command "Run" executes any DOS command or any kind of program and keeps the console free for further input.
Protection bits
Protection bits are flags that files, links and directories have in the filesystem. To change them one can either use the command Protect, or use the Information entry from the Icons menu in Workbench on selected files. AmigaDOS supports the following set of protection bits (abbreviated as HSPARWED):
H = Hold (reentrant commands with the P-bit set will automatically become resident on first execution. Requires E, P and R bits set to work. Does not mean "Hide". See below.)
S = Script (Batch file. Requires E and R bits set to work.) If this protection bit is set, then AmigaDOS is able to recognize and automatically run a script by simply invoking its name. Without the S bit, scripts can still be launched using the Execute command.
P = Pure (indicates reentrant commands that can be made resident in RAM and then no longer need to be loaded any time from flash drives, hard disks or any other media device. Requires E and R bits set to work.)
A = Archive (Archived bit, used by various backup programs to indicate that a file has been backed up)
R = Read (Permission to read the file, link or content of directory)
W = Write (Permission to write the file, link or inside a directory)
E = Execute (Permission to execute the file or enter the directory. All commands need this bit set, or they won't run. Requires R bit set to work.)
D = Delete (Permission to delete the file, link or directory)
The H-bit has often been misunderstood to mean "Hide". In the Smart File System (SFS), files and directories with the H-bit set are hidden from the system. It is still possible to access hidden files, but they do not appear in any directory listing.
Demonstration of H-bit in action:
Notice how the list command becomes resident after execution when the H-bit is set.
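How such a set of flags can be packed into a single mask is illustrated by the following sketch (Python, illustration only; the real AmigaDOS protection word also stores the RWED bits inverted, so that a set bit means the operation is forbidden, which this toy decoder ignores):
FLAGS = "HSPARWED"   # Hold, Script, Pure, Archive, Read, Write, Execute, Delete

def decode(mask):
    """Render a protection mask in the conventional letter form."""
    return "".join(f if (mask >> (7 - i)) & 1 else "-" for i, f in enumerate(FLAGS))

print(decode(0b00001111))    # ----RWED (an ordinary readable, writable, executable, deletable file)
print(decode(0b00101111))    # --P-RWED (a pure, resident-capable command)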
Local and global variables
Like any other DOS, AmigaDOS deals with environment variables as used in batch programming.
There are both global and local variables, and they are referred to with a dollar sign in front of the variable name, for example $myvar. Global variables are available system-wide; local variables are only valid in the current shell. In case of name collision, local variables have precedence over global variables. Global variables can be set using the command SetEnv, while local variables can be set using the command Set. There are also the commands GetEnv and Get that can be used to print out global and local variables.
The examples below demonstrate simple usage:
1> setenv foo blapp
1> echo $foo
blapp
1> set foo bar
1> echo $foo
bar
1> getenv foo
blapp
1> get foo
bar
1> type ENV:foo
blapp
1> setenv save foo $foo
1> type ENV:foo
bar
1> type ENVARC:foo
bar
Note the save flag of the SetEnv command, and how global variables are available in the filesystem.
Global variables are kept as files in ENV:, and optionally saved on disk in ENVARC: to survive reboot and power cycling. ENV: is by default an assign to RAM:Env, and ENVARC: is an assign to SYS:Prefs/Env-archive where SYS: refers to the boot device. On bootup, the content of ENVARC: is copied to ENV: for accessibility.
When programming AmigaDOS scripts, one must keep in mind that global variables are system-wide. All script-internal variables should be set using local variables, or one risks conflicts over global variables between scripts. Also, global variables require filesystem access, which typically makes them slower to access than local variables.
Since ENVARC: is also used to store other system settings than just string variables (such as system settings, default icons and more), it tends to grow large over time, and copying everything over to ENV: located on RAM disk becomes expensive. This has led to alternative ways to set up ENV: by using dedicated ramdisk handlers that only copy files over from ENVARC: when the files are requested. Examples of such handlers are and.
An example demonstrating creative abuse of global variables as well as Lab and Skip is the AmigaDOS variant of the infamous GOTO.
Case sensitivity
AmigaDOS is in general case-insensitive. Indicating a device as "Dh0:", "DH0:" or "dh0:" always refers to the same partition; however, for file and directory names, this is filesystem-dependent, and some filesystems allow case sensitivity as a flag upon formatting. An example of such a file system is Smart File System. This is very convenient when dealing with software ported over from the mostly case-sensitive Un*x world, but causes much confusion for native Amiga applications, which assume case insensitivity. Advanced users will hence typically only use the case sensitivity flag for file systems used for software originating from Un*x.
Re-casing of file, directory and volume names is allowed using ordinary methods; the commands "rename foo Foo" and "relabel Bar: bAr:" are valid and do exactly what is expected, in contrast to for example on Linux, where "mv foo Foo" results in the error message "mv: `foo' and `Foo' are the same file" on case-insensitive filesystems like VFAT.
Volume naming conventions
Partitions and physical drives are typically referred to as DF0: (floppy drive 0), DH0: (hard drive 0), etc. However, unlike many operating systems, outside of built-in physical hardware devices like DF0: or HD0:, the names of the single disks, volumes and partitions are totally arbitrary: for example a hard disk partition could be named Work or System, or anything else at the time of its creation. Volume names can be used in place of the corresponding device names, so a disk partition on device DH0: called Workbench could be accessed either with the name DH0: or Workbench:. Users must indicate to the system that "Workbench" is the volume "Workbench:" by always typing the colon ":" when they are entering information in a requester form or into AmigaShell.
If an accessed volume name cannot be found, the operating system will prompt the user to insert the disk with the given volume name, or allow the user to cancel the operation.
In addition, logical device names can be set with the "assign" command to any directory or device; programs often assigned a virtual volume name to their installation directory (for instance, a fictional wordprocessor called Writer might assign Writer: to DH0:Productivity/Writer). This allows for easy relocation of installed programs. The default name SYS: is used to refer to the volume that the system was booted from. Various other default names are provided to refer to important system locations. e.g. S: for startup scripts, C: for AmigaDOS commands, FONTS: for installed fonts, etc.
Assigns can also be set on multiple directories, which will then be treated as a union of their contents. For example, FONTS: might be assigned to SYS:Fonts, then extended to include, for example, Work:UserFonts using the add option of the AmigaDOS Assign command. The system would then permit use of fonts installed in either directory. Listing FONTS: would show the files from both locations.
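The union behaviour of a multi-directory assign can be modelled with a short sketch (Python; the directory names follow the FONTS: example above, and the lookup is only a stand-in for what dos.library does internally, not its actual implementation):
import os

ASSIGNS = {"FONTS": ["SYS:Fonts", "Work:UserFonts"]}   # one logical name, two directories

def list_assign(name):
    """Union of the directory contents, as listing FONTS: would show."""
    entries = set()
    for directory in ASSIGNS.get(name, []):
        if os.path.isdir(directory):
            entries.update(os.listdir(directory))
    return sorted(entries)

def resolve(name, filename):
    """Return the first directory in the assign list that contains filename."""
    for directory in ASSIGNS.get(name, []):
        candidate = os.path.join(directory, filename)
        if os.path.exists(candidate):
            return candidate
    return None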
Conventions of names and typical behaviour of virtual devices
The physical device DF0: shares the same floppy drive mechanics with PC0:, which is the CrossDOS virtual device capable of reading PC formatted floppy disks. When any PC formatted floppy disk is inserted into the floppy drive, the DF0: floppy Amiga icon will change to indicate that the disk is unknown to the normal Amiga device, and it will show four question marks as the standard "unknown" volume name, while the PC0: icon will appear revealing the name of the PC formatted disk. Any disk change with Amiga formatted disks will invert this behaviour.
File systems
AmigaDOS supports various filesystems and variants. The first filesystem was simply called Amiga FileSystem, and was suitable mainly for floppy disks, because it did not support automatic booting from hard disks (on floppy, booting was done using code from the bootblock). It was soon replaced by FastFileSystem (FFS), and hence the original filesystem was known by the name of "Old" FileSystem (OFS). FFS was more efficient on space and quite measurably faster than OFS, hence the name.
With AmigaOS 2.x, FFS became an official part of the OS and was soon expanded to recognise cached partitions, international partitions allowing accented characters in file and partition names, and finally (with MorphOS and AmigaOS 4) long filenames, up to 108 characters (from 31).
Both AmigaOS 4.x and MorphOS featured a new version of FFS called FastFileSystem 2. FFS2 incorporated all of the features of the original FFS including, as its author put it, "some minor changes". In order to preserve backwards compatibility, there were no major structural changes. (However, FFS2 on AmigaOS 4.1 differs in that it can expand its features and capabilities with the aid of plug-ins.) As with FFS2, the AmigaOS 4 and MorphOS version of Smart FileSystem is a fork of the original SFS and is not 100% compatible with it.
Other filesystems like FAT12, FAT16, FAT32 from Windows or ext2 from Linux are available through easily installable (drag and drop) system libraries or third party modules such as FAT95 (features read/write support), which can be found on the Aminet software repository. MorphOS 2 has built-in support for FAT filesystems.
AmigaOS 4.1 adopted a new filesystem called JXFS, capable of supporting partitions over a terabyte in size.
Alternative filesystems from third-party developers include Professional FileSystem, a filesystem with a simple structure, based on metadata, offering high internal coherence, capable of defragmenting itself on the fly, and not requiring unmounting before being mounted again; and Smart FileSystem, a journaling filesystem which performs journaled activities during system inactivity and which has been chosen by MorphOS as its standard filesystem.
Official variants of Amiga filesystems
Old File System/Fast File System
OFS (DOS0)
FFS (DOS1)
OFS International (DOS2)
FFS International (DOS3)
OFS Directory Caching (DOS4)
FFS Directory Caching (DOS5)
Fast File System 2 (AmigaOS4.x/MorphOS)
OFS Long filenames (DOS6)
FFS Long filenames (DOS7)
Both DOS6 and DOS7 feature the International filenames featured in DOS2 and DOS3, but not Directory Caching, which was abandoned due to bugs in the original implementation. DOS4 and DOS5 are not recommended for use for this reason.
Dostypes are backwards compatible with each other, but not forward compatible. A DOS7 formatted disk cannot be read on original Amiga FFS, and a DOS3 disk cannot be read on a KS1.3 Amiga. However, any disk formatted with DOS0 using FFS or FFS2 can be read by any version of the Amiga operating system. For this reason, DOS0 tended to be the format of choice of software developers distributing on floppy, except where a custom filesystem and bootblock was used - a common practice in Amiga games. Where software needed AmigaOS 2 anyway, DOS3 was generally used.
FastFileSystem2 plug-ins
With the July 2007 update of AmigaOS 4.0, the first two plug-ins for FFS2 were released:
fs_plugin_cache: increases performance of FFS2 by introducing a new method of data buffering.
fs_plugin_encrypt: data encryption plug-in for partitions using the Blowfish algorithm.
Filename extensions
AmigaDOS has only a single mandated filename extension: ".info", which must be appended to the filename of each icon. If a file called myprog exists, then its icon file must be called myprog.info. In addition to image data, the icon file also records program metadata such as options and keywords, its own position on the desktop (AmigaOS can "snapshot" icons in places defined by the user), and other information about the file. Directory window size and position information is stored in the ".info" file associated with the directory, and disk icon information is stored in "Disk.info" in the root of the volume.
With the exception of icons, the Amiga system does not identify file types using extensions, but instead will examine either the icon associated with a file or the binary header of the file itself to determine the file type.
See also
Comparison of operating systems
References
Further reading
External links
AmigaOS
Disk operating systems
MorphOS
1985 software |
31665917 | https://en.wikipedia.org/wiki/All%20Watched%20Over%20by%20Machines%20of%20Loving%20Grace%20%28TV%20series%29 | All Watched Over by Machines of Loving Grace (TV series) | {{Infobox television
| image =
| image_size =
| image_alt =
| caption =
| writer = Adam Curtis
| director = Adam Curtis
| country = United Kingdom
| language = English
| num_series = 1
| num_episodes = 3
| executive_producer = Dominic Crossley-Holland
| producer = Lucy Kelsall, Adam Macqueen, James Harkin, Andrew Orlowski
| runtime = 180 minutes (in three parts)
| company = BBC
| network = BBC Two
| first_aired =
| last_aired =
| preceded_by = The Trap (2007)
| followed_by = Bitter Lake (2015)
| website =
}}
All Watched Over by Machines of Loving Grace is a BBC television documentary series by filmmaker Adam Curtis. In the series, Curtis argues that computers have failed to liberate humanity, and instead have "distorted and simplified our view of the world around us." The title is taken from a 1967 poem of the same name by Richard Brautigan. The first episode was originally broadcast at 9 pm on 23 May 2011.
Episodes
Part 1. 'Love and Power'
In the first episode, Curtis traces the effects of Ayn Rand's ideas on American financial markets, particularly via the influence of Alan Greenspan, who was a member of a reading group called the Collective, which discussed her work and her philosophy of Objectivism. While Rand's novels were critically savaged, they inspired people working in the technology sector of Silicon Valley, leading to the emergence of the Californian Ideology, a techno-utopian belief that computer networks could measure, control and help to stabilise societies without hierarchical political control. Rand had an affair with Nathaniel Branden, another member of The Collective, with the approval of Branden's wife, Barbara Branden. The affair would eventually end acrimoniously and the Collective disbanded. Rand's circle of friends contracted considerably, though Greenspan remained loyal to her.
Greenspan entered government in the 1970s and became Chairman of the Federal Reserve. In 1992, he visited the newly elected Bill Clinton and persuaded him to curtail U.S. government intervention in the economy, letting the markets manage themselves with the help of computer modelling to predict risks and hedge against them, a paradigm named "the New Economy". However, by 1996, production figures failed to increase, but profits were nevertheless rising. Greenspan worried that unsustainable speculative bubbles were forming, but after political attacks from all sides, Greenspan changed his reasoning and suggested that new efficiencies had emerged that his data wasn't measuring. In parallel with this, American investors began pouring large sums of money into economies in eastern Asia, though the Council of Economic Advisers, led by Joseph Stiglitz, began warning that these economies were much more fragile than they seemed. However, these warnings did not reach the president, having been blocked by Robert Rubin, who feared damage to financial interests.
The 1997 Asian financial crisis began as the property bubble in the Far East began to burst, first in Thailand, then later in South Korea and Indonesia, causing large financial losses in those countries that greatly affected foreign investors. While Bill Clinton was preoccupied with the Monica Lewinsky scandal, Robert Rubin took control of foreign policy and forced loans onto the affected countries. However, after each country agreed to be bailed out by the IMF, foreign investors immediately withdrew their money, destroying their economies and leaving their taxpayers with enormous debts.
Alan Greenspan would rise to greater prominence after his handling of the economic effects of the September 11 attacks, later cutting interest rates in the wake of the Enron scandal in a bid to stimulate the economy. Unusually, this triggered a consumer boom without creating inflation, creating new certainty that the New Economy truly existed. However, in reality, to avoid a repeat of the earlier economic crises in East Asia, China's Politburo had decided to influence America's economy via similar techniques to those used by America on other Far Eastern countries. By keeping China's exchange rate artificially low, they sold cheap goods to America, using the proceeds to buy American bonds. The money flooding into America reduced the perception of risk in signing loans to lower income clients, permitting lending beyond the point that was actually sustainable. The high level of loan defaulting that followed led ultimately to the 2007-08 financial crisis, caused by the collapse of a housing bubble similar to that which Far Eastern countries had previously faced.
In 1994, Carmen Hermosillo published a widely influential essay online, "Pandora's Vox: On Community in Cyberspace", and it began to be argued that the use of computer networks had led not to a reduction in hierarchy, but actually a commodification of personality and a complex transfer of power and information to corporations. Curtis ends the piece by pointing out that not only has the idea of market stability failed to bear out in practice, but that the Californian Ideology has also been unable to bring about long-term stability. Curtis contends that the ideology had not freed its proponents from hierarchies, but has instead trapped them in a rigid system of control from which they are unable to escape.
Contributors
Barbara Branden, member of Ayn Rand's circle, 1950s
John McCaskey, Digital Entrepreneur, Silicon Valley, 1990s
Kevin O'Connor, Internet Entrepreneur, Silicon Valley, 1990s
Loren Carpenter and Rachel Carpenter
Kevin Kelly, Wired Magazine
Stewart Brand, Global Business Network
Alvin Toffler, Digital Futurologist
Peter Schwartz, Global Business Network
Kenichi Ohmae, author, The End of the Nation State
Nathaniel Branden, Ayn Rand's lover
Joan Mitchell
Stephen Roach, Chief Economist, Morgan Stanley 1990s
Joseph Stiglitz, Head of the Council of Economic Advisers 1995–97
Robert Rubin, US Secretary of the Treasury 1995–99
Part 2. 'The Use and Abuse of Vegetational Concepts'
This episode investigates how ideas such as cybernetics and systems theory were applied to natural ecosystems, creating a mechanical view of the natural world, and how this relates to the false idea that there is a balance of nature. The idea of ecosystems was proposed in 1935 by Arthur Tansley, an English botanist, based on his belief that the whole of the natural world operated as a series of interconnected networks. Taken together with Jay Forrester's work in cybernetic systems, which posited that all networks are regulated by feedback loops, the belief emerged that the natural world is composed of self-regulating ecosystems that tend towards balance and equilibrium. Norbert Wiener laid out the position that humans, machines and ecology are simply nodes in a network in his book Cybernetics, or Control and Communication in the Animal and the Machine, and this book became the bible of cybernetics. Brothers Howard T. Odum and Eugene Odum, both ecologists, further developed these ideas; Howard collected data from ecological systems and built electronic networks to simulate them, and his brother Eugene then took these ideas and generalised them to the whole of ecology. The idea that the natural world tended towards balance became conventional wisdom among scientists.
In the 1960s, Buckminster Fuller invented a radically new kind of structure, the geodesic dome, which emulated ecosystems in being made of highly connected, relatively weak parts, which built a stronger structure. It was applied to the radomes covering early warning systems in the Arctic. His other system-based ideas inspired the counterculture movement. Communes of people who saw themselves as nodes in a network, without hierarchy, and applied feedback to try to control and stabilise their societies, used his geodesic domes as habitats. Around this time, Stewart Brand filmed a demonstration of a networked computer system with a graphics display, mouse and keyboard that he believed would save the world by empowering people, in a similar way to the communes, to be free as individuals. In 1967, Richard Brautigan published the poetry work All Watched Over by Machines of Loving Grace, which promoted the idea of a cybernetic ecological utopia consisting of a fusion of computers and organisms living in perfect harmony and stability.
By the 1970s, new challenges emerged that could not be solved by normal hierarchical systems, such as overpopulation, limited natural resources and pollution. Jay Forrester applied systems theory to the problem and drew a cybernetic system diagram for the world. This was turned into a computer model which predicted population collapse. This became the basis of the model that was used by the Club of Rome, and the findings from this were published in The Limits to Growth. Forrester then argued for zero growth in order to maintain a steady equilibrium within the capacity of the Earth. However, this was opposed by many people within the environmental movement, since the model did not allow for people to change their values to stabilise the world, and they argued that the model tried to maintain and enforce the current political hierarchy. Critics compared Forrester's ideas to a dispute between Arthur Tansley and Field Marshal Jan Smuts. Smuts had invented a philosophy called holism, where everyone had a 'rightful place', which was to be managed by the white race, which Tansley called an "abuse of vegetational concepts." The 70s protestors claimed that the same conceptual abuse of the supposed natural order was occurring, that it was really being used for political control.
The belief in the stability of natural systems began to break down when a study was made of the predator-prey relationship of wolves and elks. It was found that populations of predators and prey had varied wildly over centuries. Other studies then found huge variations, and a significant lack of homeostasis in natural systems. George Van Dyne then tried to build a computer model to simulate a complete ecosystem based on extensive real-world data, to show how the stability of natural systems actually worked. To his surprise, the computer model did not stabilise like the Odums' electrical model had. The reason for this lack of stabilisation was that he had used extensive data which more accurately reflected reality, whereas the Odums and other ecologists had "ruthlessly simplified nature." The scientific idea had thus been shown to fail, but the popular idea remained in currency, and even grew as it apparently offered the possibility of a new egalitarian world order.
In 2003, a wave of spontaneous revolutions swept through Asia and Europe. Coordinated only via the internet, nobody seemed to be in overall charge, and no overall aims except self-determination and freedom were apparent. This seemed to justify the beliefs of the computer utopians. However, the freedom from these revolutions lasted for only a short time, with most of the countries falling back into political corruption almost immediately. Curtis compares them with the hippy communes, all of which broke up within a few years, as aggressive members of the group began to bully the weaker ones, who were unable to band together in their own defence because formal power structures were prohibited by the commune's rules, and even intervention against bullying by benevolent individuals was discouraged.
Curtis closes the episode by stating that it has become apparent that while self-organising networks are good at organising change, they fail to provide direction for determining what comes afterwards; networks leave people helpless in the face of those who already wield political power.
Contributors
Peder Anker, historian of ecology
Jay Forrester, systems theorist
Fred Turner, historian of media and technology
Peter J. Taylor, historian of science
Dr Daniel Botkin, ecologist
Randall Gibson, former member of 'Synergia' commune
Molly Hollenbach, former member of 'The Family' commune
Stewart Brand
Alexander King, co-founder of the Club of Rome (archive)
Tord Björk, environmental activist
Dr Steward Pickett, ecologist
Dr Dave Swift and Dr Sam Bledsoe, Grasslands Project
Al Gore, former US Vice President
Dr Laura J. Cameron, historical geographer
Part 3. 'The Monkey in the Machine and the Machine in the Monkey'
This episode looks into the selfish gene theory invented by William Hamilton, which holds that humans are machines controlled by genes. Curtis also covers the source of ethnic conflict that was created by Belgian colonialism's artificial creation of a racial divide and the ensuing slaughter that occurred in the Democratic Republic of the Congo, which is a source of raw material for computers and cell phones.
In the 1930s, Armand Denis made films that told the world about Africa. However, his documentary gave fanciful stories about Rwanda's Tutsis being a noble ruling elite originally from Egypt, whereas the Hutus were a peasant race. In reality, they were racially the same, but the Belgian rulers had ruthlessly exploited the myth to divide the Rwandan people. But when it came to independence, liberal Belgians felt guilty, and decided the Hutus should overthrow the Tutsi rule. This led to a bloodbath, as the Tutsis were then seen as aliens and were slaughtered.
In 1960, Congo had become independent from Belgium, but governance promptly collapsed, and towns became battle grounds as soldiers fought for control of the mines. America and the Belgians organised a coup, and the elected leader, Patrice Lumumba, was kidnapped and executed, causing chaos. However, the Western mining operations were initially largely unaffected. Mobutu Sese Seko was installed as president, killed his opponents and stopped a liberal democracy from forming. Mobutu changed the Congo's name to Zaire, looting millions of dollars and letting mines and industries collapse.
In Congo, with a civil war ongoing, Dian Fossey, who was researching gorillas, was captured. She escaped and created a new camp high up on a mountain in Rwanda, where she continued to study gorillas. She tried to completely protect the gorillas, which were very susceptible to human diseases and were hated because they terrorised the local people. Fossey sabotaged the local people's traps and tried to terrorise them by claiming to cast spells on them. Ultimately, Fossey's favourite gorilla, Digit, was killed by the vengeful locals. Curtis draws a parallel between Fossey and the colonialists who oppressed the Congolese, describing her as one of many westerners who brutalised and terrorised African peoples for their own high-minded ideals.
Bill Hamilton was a solitary man who saw everything through the lens of Darwin's theory of evolution. He wanted to know why some ants and humans give up their life for others. In 1963, he realised that most of the behaviours of humans were due to genes, and he began looking at humans from the genes' point of view. From this perspective, humans were machines that were only important for carrying genes, and it made sense for a gene to sacrifice a human if it meant that another copy of the gene would survive. In 1967, American chemist George R. Price went to London after reading Hamilton's little-known papers and discovering that his equations for the behaviours of genes were equivalent to computer equations. He was able to show that these equations explained murder, warfare, suicide, goodness and spite, since these actions could help the genes. John von Neumann had invented self-reproducing machines, but Price was able to show that the self-reproducing machines were already in existence – humans were such machines.
These revelations had an enormous effect on Price. Previously a staunch rationalist, Price began to believe that these equations had been given to him by God, even though some argue that they are evidence against the existence of God. In 1973, after converting to extreme Christianity as a last chance to disprove the selfish gene theories' gloomy conclusions, Price decided to start helping poor and homeless people, giving away all his possessions in acts of random kindness. These efforts utterly failed, and he came to believe that he was being followed by the hound of heaven. He finally revealed, in his suicide note, that these acts of altruism brought more harm than good to the lives of homeless people. Richard Dawkins later took Hamilton and Price's equations and popularised them, explaining that humans are simply machines created by the selfish genes. Curtis likens this to "reinventing the immortal soul," but as computer code in the form of the genes.
In 1994, the ruling Hutu government set out to eradicate the Tutsi minority. This was explained as incomprehensible ancient rivalry by the Western press. In reality it was due to the Belgian myth created during the colonial rule. Western agencies got involved, and the Tutsi fought back, creating chaos. Many flooded across the border into Zaire, and the Tutsi invaded the refugee camps to get revenge. Mobutu fell from power. Troops arrived from many countries, allegedly to help, but in reality to gain access to the country's natural resources, used to produce consumer goods for the West. Altogether, 4.5 million people were killed.
By this point Hamilton was well-honoured. However, by now he supported eugenics and believed that the help provided to the ill and disabled by modern medicine was counter to the logic of genes. He heard a story that HIV had been created from an accident with a polio vaccine, which it was thought could have been contaminated with a chimp virus. This supported his idea that modern medicine could have negative consequences. Hamilton travelled to Kisangani in the Democratic Republic of the Congo while the Second Congo War was raging. He went there to collect Chimpanzee faeces to test his theory that HIV was due to a medical mistake. While there he caught malaria, for which he took aspirin, which lodged in his gut, causing a haemorrhage which killed him. His hypothesis about the creation of AIDS would ultimately be entirely debunked.
Curtis ends the episode by saying that Hamilton's ideas that humans are computers controlled by the genes have become accepted wisdom. But he asks whether we have accepted a fatalistic philosophy that humans are helpless computers to explain and excuse the fact that, as in the Congo, we are effectively unable to improve and change the world.
Contributors
Prof. Michael Ruse, friend of Bill Hamilton
Kathleen Price, George Price's daughter
Edward Teller (archive)
James Schwartz, George Price's biographer
Bill Hamilton (interviewed 1999)
Interviews and reviews
In May 2011, Adam Curtis was interviewed about the series by Katharine Viner in The Guardian, by The Register, and by Little Atoms.
Catherine Gee at The Daily Telegraph said that what Adam Curtis reveals, "is the dangers of human beings at their most selfish and self-satisfying. Showing no compassion or consideration for your fellow human beings creates a chasm between those able to walk over others and those too considerate – or too short-sighted – to do so."
John Preston also reviewed the first episode, and said that although it showed flashes of brilliance, it had an "infuriating glibness too as the web of connectedness became ever more stretched. No one could dispute that Curtis has got a very big bite indeed. But what about the chewing, you ask. There wasn't any – or nothing like enough of it to prevent a bad case of mental indigestion."
Andrew Anthony published a review in The Observer and The Guardian, and commented on the central premise that we had been made to "believe we could create a stable world that would last for ever" but that he doesn't "recall ever believing that 'we' could create a stable world that would last for ever", and noted that: "For the film-maker there seems to be an objective reality that a determined individual can penetrate if he is willing to challenge the confining chimeras of markets and machines. Forget the internet tycoons. The Randian hero is Curtis himself."
Music
Curtis's style is typified by the use of frequent and often incongruous cuts of film and music, often lasting only a fraction of a second, in a technique similar to sampling.
Music used in the documentary includes:
See also
Darwin among the Machines
References
External links
Teaser on Adam Curtis's blog
Longer promotional video on Adam Curtis's blog
This article by Adam Curtis in The Observer complements the second episode.
2011 British television series debuts
2011 British television series endings
2010s British documentary television series
BBC television documentaries
English-language television shows
Documentary films about computing
Documentary films about politics
Documentary films about science
Documentary films about philosophers
Films about philosophy
Collage film
Films directed by Adam Curtis
Collage television |
487164 | https://en.wikipedia.org/wiki/List%20of%20public%20lecture%20series | List of public lecture series | Recurrent series of notable public lectures are presented in various countries.
General
Australia
The Boyer Lectures delivered by prominent Australians, broadcast annually by the Australian Broadcasting Corporation.
The Errol Solomon Meyers Memorial Lecture, held annually at the University of Queensland in Brisbane.
The George Ernest Morrison Lecture in Ethnology, held annually at the Australian National University in Canberra.
Canada
The Massey Lectures are held at and sponsored by Massey College at the University of Toronto annually
The Watts Lectures are held several times each year at the University of Toronto Scarborough
Denmark
The H.C. Ørsted Lectureship held at and sponsored by The Technical University of Denmark, annually
Public Lectures in Science (In Danish: Offentlige Foredrag i Naturvidenskab) sponsored by Faculty of Science and Technology, Aarhus University, and held in Vejle, Horsens, Herning and Aarhus.
India
Vasant Vyakhyanmala is a traditional annual spring lecture series held in Pune, India, for the last 140 years and hosted by Vaktruttvottejak Sabha.
NITI (National Institute for Transforming India), India, lecture series: Transforming India
United Kingdom
The Dingwall Beloe Lecture Series, held at the British Museum annually, intended to make new contributions to the history of horology, with a particular international focus.
Gresham College has given free public lectures since it was founded in 1597
The Reith Lectures, broadcast annually on the BBC, founded in honour of Lord Reith
The Romanes Lectures, on "any topic in the Arts, Science, or Literature", given annually at the University of Oxford founded by George Romanes
The Royal Institution Christmas Lectures have presented scientific ideas to young people in an entertaining manner since 1825.
United States
The Art, Technology, and Culture (ATC) Lecture Series, at University of California, Berkeley in Berkeley, CA
The Charles Eliot Norton Lectures at Harvard University, in Cambridge, MA
Distinctive Voices, sponsored by the National Academy of Sciences, presents lectures on a wide range of scientific and technical topics at the Beckman Center in Irvine, CA and the Jonsson Center in Woods Hole, MA
The Morgenthau Lectures, at the Carnegie Council in New York
Social and political
United States
The Irving E. Carlyle Lecture Series at Wake Forest University
Landon Lecture Series at Kansas State University
Whizin Center Public Lecture Series at the American Jewish University
Aeronautics and astronautics
United States
Evolution of Flight Lecture Series from American Institute of Aeronautics and Astronautics (AIAA)
Computer science
Canada
UBC CS Distinguished Lecture Series
Greece
Distinguished Lecturer Series - Leon The Mathematician at School of Informatics, Aristotle University of Thessaloniki
United States
CDS Lecture Series at Intelligent Servosystems Laboratory, Institute for Systems Research at University of Maryland, College Park
MURL Lecture Series at Multi-University/Research Laboratory (MURL) as a group of institutions:
School of Computer Science, Carnegie Mellon University;
Laboratory for Computer Science, Massachusetts Institute of Technology;
Microsoft Research;
School of Engineering, Stanford University;
Dept. of Computer Science, University of Washington; and
Xerox Palo Alto Research Center.
History and humanities
United Kingdom
Caird Medal lecture series at the National Maritime Museum
E. A. Lowe Lectures, given triennially at Corpus Christi College, University of Oxford, on palaeography
Ford Lectures, given annually at Oxford University on British history
Lyell Lectures, given annually at Oxford University on the history of the book or bibliography
McKenzie Lectures, given annually at Oxford University on the history of the book, scholarly editing, textual criticism, or bibliography
Panizzi Lectures, given annually at the British Library on a topic in bibliography or book history
Sandars Lectures, given annually at Cambridge University on a topic in bibliography or book history
Lees Knowles Lectures, given annually on military history at Trinity College, Cambridge
Raleigh Lectures on History at the British Academy, London, England
Black History for Action at Lambeth Town Hall, London, England.
Keynes Lectures in Economics at the British Academy, London, England.
United States
Annual History Lecture series from University of Washington Alumni Association (UWAA).
Chicago Humanities Festival
James Ford Bell Lecture series at the University of Minnesota
Jefferson Lecture, annual honorary lecture sponsored by the National Endowment for the Humanities
Massey Lectures (Harvard University)
W. A. Hammond Lecture on the American Tradition, Miami University since 1962
Hungary
Eötvös József Lecture
The Netherlands
Nexus Lectures since 1994
Mosse Lectures, given annually in Amsterdam
Journalism and media studies
United States
Robert C. Vance Distinguished Lecture Series at Central Connecticut State University
Law
United Kingdom
Hamlyn Lectures
Mathematics and mathematical sciences
United Kingdom
Sir David Wallace Lecture Series at Loughborough University
United States
Spring Lecture Series at University of Arkansas
Neurosciences and mind/brain sciences
Australia
The Melbourne Neuroscience Public Lecture Series at The Melbourne Neuroscience Institute, University of Melbourne
Canada
Treva Glazebrook Lecture Series at University of Western Ontario
United States
CSHL Adult Lectures at Cold Spring Harbor Laboratory
Contemporary Neurology and Neuroscience Evening Lecture Series at Department of Neurology, University of California, Irvine
Cognitive Neuroscience Lecture Series at the Center for Neural Science, New York University
Duke Neurobiology Lecture Series at Department of Neurobiology, Duke University Medical Center
Frontiers in Neuroscience Lecture Series by the Graduate Program in Neuroscience, Emory University
Gardner Murphy Memorial Lecture Series by the American Society for Psychical Research
HHMI's Neuroscience Lecture Series at Howard Hughes Medical Institute
IHF Distinguished Lecture Series on Brain, Learning and Memory at Irvine Health Foundation
Interdisciplinary Seminar at Oklahoma Center for Neuroscience (OCNS)
Distinguished Lecturer Series at UC Davis M.I.N.D. Institute
Mind, Brain and Behavior Distinguished Lecture Series at Center for Cognitive Neurosciences, Duke University
Mind/Brain Lecture Series at Swartz Foundation
MNIF Public Lecture Series at Montana Neuroscience Institute Foundation
Neuroscience Lecture Series at University of Wisconsin–Madison
NRC Lecture Series at Neuroscience Research Center, University of Texas Health Science Center at Houston
NUIN Lecture Series at the Northwestern University Institute for Neuroscience
Pinkel Endowed Lecture Series at Institute for Research in Cognitive Science, University of Pennsylvania
Wyeth Distinguished Lecture Series in Behavioral Neuroscience at Rutgers University
Yerkes Neuroscience Lecture Series by Neuroscience Division, Yerkes National Primate Research Center
Physics
Canada
Perimeter Institute's Public Lecture Series at Perimeter Institute
United Kingdom
Sir Nevill Mott Lecture Series at Loughborough University
United States
Fermilab Lecture Series at Fermilab
Henry Norris Russell Lectureship before the American Astronomical Society
Jansky Lectureship before the National Radio Astronomy Observatory
Leon Pape Memorial Lecture Series at California State University at Los Angeles
Lyman Parratt Lecture Series at Cornell University
Theodore von Kármán Lecture Series at Jet Propulsion Laboratory (JPL)
Richtmyer Memorial Award Lectureship, by the American Association of Physics Teachers
Medical sciences
Italy
Giuseppe Bigi Memorial Lecture Series, Milan
Science
Spain
Program ConCiencia at the University of Santiago de Compostela
References
Public lecture series |
235550 | https://en.wikipedia.org/wiki/Institute%20of%20Software%2C%20Chinese%20Academy%20of%20Sciences | Institute of Software, Chinese Academy of Sciences | Institute of Software, Chinese Academy of Sciences (IOS or ISCAS; simplified Chinese: 中国科学院软件研究所; pinyin: Zhōngguó Kēxuéyuàn Ruǎnjiàn Yánjiūsuǒ) is one of the institutes established by the Chinese Academy of Sciences (CAS). It was established on March 1, 1985, and is located at 4 Zhongguancun South Fourth Street, Haidian District, Beijing, China, with branches in Wuxi, Chongqing, Harbin, Guangzhou, Qingdao and Guiyang. Its research areas are computer science theory and application, fundamental software technology and system structure, Internet information processing theory, methods and technology, and integrated information system technology. The institute has five divisions: the General Division, Basic Research Division, Hi-Tech Research Division, Applied Research Division and Development Division. It also hosts state key laboratories and national engineering research centers: the State Key Laboratory of Computer Science, the National Engineering Research Center of Fundamental Software, and the Science and Technology on Integrated Information System Laboratory. Software academic journals hosted by ISCAS are the Journal of Software, the Journal of Chinese Information Processing and Computer Systems Applications.
See also
China Software Industry Association
Software industry in China
External links
http://english.is.cas.cn/
Research institutes of the Chinese Academy of Sciences
Software
1985 establishments in China
Education in Beijing |
235550 | https://en.wikipedia.org/wiki/Sequence%20analysis | Sequence analysis | In bioinformatics, sequence analysis is the process of subjecting a DNA, RNA or peptide sequence to any of a wide range of analytical methods to understand its features, function, structure, or evolution. Methodologies used include sequence alignment, searches against biological databases, and others.
Since the development of methods of high-throughput production of gene and protein sequences, the rate of addition of new sequences to the databases increased very rapidly. Such a collection of sequences does not, by itself, increase the scientist's understanding of the biology of organisms. However, comparing these new sequences to those with known functions is a key way of understanding the biology of an organism from which the new sequence comes. Thus, sequence analysis can be used to assign function to genes and proteins by the study of the similarities between the compared sequences. Nowadays, there are many tools and techniques that provide the sequence comparisons (sequence alignment) and analyze the alignment product to understand its biology.
Sequence analysis in molecular biology includes a very wide range of relevant topics:
The comparison of sequences in order to find similarity, often to infer if they are related (homologous)
Identification of intrinsic features of the sequence such as active sites, post-translational modification sites, gene structures, reading frames, distributions of introns and exons, and regulatory elements
Identification of sequence differences and variations such as point mutations and single nucleotide polymorphisms (SNPs) in order to obtain genetic markers.
Revealing the evolution and genetic diversity of sequences and organisms
Identification of molecular structure from sequence alone
In chemistry, sequence analysis comprises techniques used to determine the sequence of a polymer formed of several monomers (see Sequence analysis of synthetic polymers).
In molecular biology and genetics, the same process is called simply "sequencing".
In marketing, sequence analysis is often used in analytical customer relationship management applications, such as NPTB models (Next Product to Buy).
In social sciences and in sociology in particular, sequence methods are increasingly used to study life-course and career trajectories, time use, patterns of organizational and national development, conversation and interaction structure, and the problem of work/family synchrony. This body of research is described under sequence analysis in social sciences.
History
Since the very first sequences of the insulin protein were characterized by Fred Sanger in 1951, biologists have been trying to use this knowledge to understand the function of molecules. He and his colleagues' discoveries contributed to the successful sequencing of the first DNA-based genome. The method used in this study, which is called the “Sanger method” or Sanger sequencing, was a milestone in sequencing long strand molecules such as DNA. This method was eventually used in the human genome project. According to Michael Levitt, sequence analysis was born in the period 1969–1977. In 1969 the analysis of sequences of transfer RNAs was used to infer residue interactions from correlated changes in the nucleotide sequences, giving rise to a model of the tRNA secondary structure. In 1970, Saul B. Needleman and Christian D. Wunsch published the first computer algorithm for aligning two sequences. Over this time, developments in obtaining nucleotide sequences improved greatly, leading to the publication of the first complete genome of a bacteriophage in 1977. Robert Holley and his team at Cornell University are believed to have been the first to sequence an RNA molecule.
Sequence alignment
There are millions of protein and nucleotide sequences known. These sequences fall into many groups of related sequences known as protein families or gene families. Relationships between these sequences are usually discovered by aligning them together and assigning this alignment a score. There are two main types of sequence alignment. Pair-wise sequence alignment only compares two sequences at a time and multiple sequence alignment compares many sequences. Two important algorithms for aligning pairs of sequences are the Needleman-Wunsch algorithm and the Smith-Waterman algorithm. Popular tools for sequence alignment include:
Pair-wise alignment - BLAST, Dot plots
Multiple alignment - ClustalW, PROBCONS, MUSCLE, MAFFT, and T-Coffee.
A common use for pairwise sequence alignment is to take a sequence of interest and compare it to all known sequences in a database to identify homologous sequences. In general, the matches in the database are ordered to show the most closely related sequences first, followed by sequences with diminishing similarity. These matches are usually reported with a measure of statistical significance such as an Expectation value.
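As an illustration of the pair-wise alignment idea, the following is a minimal Needleman-Wunsch scoring sketch in Python. It uses an arbitrary match/mismatch/gap scheme and returns only the optimal global alignment score; real tools add substitution matrices, affine gap penalties and traceback of the alignment itself.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap                 # leading gaps in b
    for j in range(1, cols):
        score[0][j] = j * gap                 # leading gaps in a
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))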
Profile comparison
In 1987, Michael Gribskov, Andrew McLachlan, and David Eisenberg introduced the method of profile comparison for identifying distant similarities between proteins. Rather than using a single sequence, profile methods use a multiple sequence alignment to encode a profile which contains information about the conservation level of each residue. These profiles can then be used to search collections of sequences to find sequences that are related. Profiles are also known as Position Specific Scoring Matrices (PSSMs). In 1993, a probabilistic interpretation of profiles was introduced by Anders Krogh and colleagues using hidden Markov models. These models have become known as profile-HMMs.
In recent years, methods have been developed that allow the comparison of profiles directly to each other. These are known as profile-profile comparison methods.
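The basic profile idea can be sketched in a few lines of Python: each column of a toy multiple alignment is turned into log-odds scores against a flat background (a tiny pseudocount avoids taking the log of zero). Real profile tools add proper pseudocounts, sequence weighting and gap modelling.
from math import log2
from collections import Counter

alignment = ["ACGT", "ACGA", "ACCT"]              # toy multiple alignment
background = 0.25                                  # flat nucleotide background

pssm = []
for column in zip(*alignment):
    counts = Counter(column)
    pssm.append({res: log2((counts.get(res, 0) + 0.01) / len(column) / background)
                 for res in "ACGT"})

def profile_score(seq):
    """Log-odds score of a sequence against the profile."""
    return sum(col[res] for col, res in zip(pssm, seq))

print(round(profile_score("ACGT"), 2))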
Sequence assembly
Sequence assembly refers to the reconstruction of a DNA sequence by aligning and merging small DNA fragments. It is an integral part of modern DNA sequencing. Since presently-available DNA sequencing technologies are ill-suited for reading long sequences, large pieces of DNA (such as genomes) are often sequenced by (1) cutting the DNA into small pieces, (2) reading the small fragments, and (3) reconstituting the original DNA by merging the information on various fragments.
Recently, sequencing multiple species at one time has become a major research objective. Metagenomics is the study of microbial communities sampled directly from the environment. Unlike microorganisms cultured in the lab, wild samples usually contain dozens, sometimes even thousands, of types of microorganisms from their original habitats. Recovering the original genomes can prove to be very challenging.
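A toy version of the merging step described above can be written as a greedy procedure that repeatedly joins the pair of fragments with the longest exact suffix/prefix overlap (Python sketch; real assemblers use overlap or de Bruijn graphs and must tolerate sequencing errors and repeats):
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        olen, x, y = max(((overlap(x, y), x, y)
                          for x in frags for y in frags if x != y),
                         key=lambda t: t[0])
        frags.remove(x)
        frags.remove(y)
        frags.append(x + y[olen:])             # merge, keeping the overlap once
    return frags[0]

print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))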
Gene prediction
Gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include the prediction of other functional elements such as regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In general, the prediction of bacterial genes is significantly simpler and more accurate than the prediction of genes in eukaryotic species, which usually have complex intron/exon patterns. Identifying genes in long sequences remains a problem, especially when the number of genes is unknown. Hidden Markov models can be part of the solution. Machine learning has also played a significant role in predicting the sequence of transcription factors. Traditional sequence analysis focused on the statistical parameters of the nucleotide sequence itself. Another method is to identify homologous sequences based on other known gene sequences. The two methods described here are focused on the sequence. However, the shape of molecules such as DNA and proteins has also been studied and proposed to have an equivalent, if not higher, influence on their behavior.
Protein structure prediction
The 3D structures of molecules are of major importance to their functions in nature. Since structural prediction of large molecules at an atomic level is a largely intractable problem, some biologists introduced ways to predict 3D structure at a primary sequence level. This includes the biochemical or statistical analysis of amino acid residues in local regions and structural inference from homologs (or other potentially related proteins) with known 3D structures.
There have been a large number of diverse approaches to solve the structure prediction problem. In order to determine which methods were most effective, a structure prediction competition was founded called CASP (Critical Assessment of Structure Prediction).
Methodology
The tasks that lie in the space of sequence analysis are often non-trivial to resolve and require the use of relatively complex approaches. Of the many types of methods used in practice, the most popular include:
DNA patterns
Dynamic programming
Artificial Neural Network
Hidden Markov Model
Support Vector Machine
Clustering
Bayesian Network
Regression Analysis
Sequence mining
Alignment-free sequence analysis
See also
Fourier transform
Least-squares spectral analysis
List of sequence alignment software
List of alignment visualization software
List of phylogenetics software
List of phylogenetic tree visualization software
List of protein structure prediction software
List of RNA structure prediction software
Sequence analysis in social sciences
References
Bioinformatics |
28991357 | https://en.wikipedia.org/wiki/LinuxCNC | LinuxCNC | LinuxCNC (formerly Enhanced Machine Controller or EMC2) is a free, open-source Linux software system that implements numerical control capability using general purpose computers to control CNC machines. Designed by various volunteer developers at linuxcnc.org, it is typically bundled as an ISO file with a modified version of 32-bit Ubuntu Linux which provides the required real-time kernel.
Due to the tight real-time operating system integration, a standard Ubuntu Linux desktop PC without the real-time kernel will only run the package in demo mode.
Purpose
LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development).
Currently it is almost exclusively used on x86 PC platforms, but it has been ported to other architectures. It makes extensive use of a kernel modified for real-time use, and supports both stepper- and servo-type drives.
It does not provide drawing (CAD - Computer Aided Design) or G-code generation from the drawing (CAM - Computer Automated Manufacturing) functions.
History
The EMC public-domain software system was originally developed by NIST as the next step beyond the National Center for Manufacturing Sciences / Air Force sponsored Next Generation Controller program (NGC, 1989) and its Specification for an Open Systems Architecture (SOSAS); the result was called the Enhanced Machine Controller (EMC) architecture (1993). Government-sponsored public-domain software systems for the control of milling machines were among the first projects developed with the digital computer in the 1950s. EMC was to be a "vendor-neutral" reference implementation of the industry standard language for numerical control of machining operations, RS-274D (G-code).
The software included the RS274 interpreter driving the motion trajectory planner, real-time motor/actuator drivers and a user interface. It demonstrated the feasibility of an advanced numerical control system using off the shelf PC hardware running FreeBSD or Linux, interfacing to various hardware motion control systems. Additional development continues using current and additional architectures (e.g. ARM architecture devices).
The demonstration project was very successful and created a community of users and volunteer contributors. Around June 2000, NIST relocated the source code to SourceForge under the Public Domain license in order to allow external contributors to make changes. In 2003, the community rewrote some parts of it, reorganized and simplified other parts, then gave it the new name, EMC2. EMC2 is still being actively developed. Licensing is now under the GNU General Public License.
The adoption of the new name EMC2 was prompted by several major changes. Primarily, a new layer known as HAL (Hardware Abstraction Layer) was introduced to interconnect functions easily without altering C code or recompiling. This split trajectory and motion planning from the motion hardware, making it easier to generate control programs to support gantry machines, lathe threading and rigid tapping, SCARA robot arms and a variety of other adaptations. HAL comes with interactive tools to examine signals and to connect and remove links. It also includes a virtual oscilloscope to examine signals in real time. Another addition in EMC2 is Classic Ladder, an open-source ladder logic implementation adapted for the real-time environment, used to configure complex auxiliary devices such as automatic tool changers.
Around 2011, the name was changed officially from EMC2 to LinuxCNC. This was done at the insistence of EMC Corporation and with the agreement of the project leadership; internally, some still refer to LinuxCNC as EMC or EMC2, as it was historically known. EMC Corporation argued that the project's previous name could confuse customers or potential customers of its (mainly) storage-related products.
Platforms
Due to the need for fine-grained, precise real-time control of machines, LinuxCNC requires a platform with real-time computing capabilities. Early versions of LinuxCNC (EMC) ran under a real-time version of Windows NT, but later versions of Windows did not have good real-time support, so Linux with real-time extensions became the preferred platform. Currently LinuxCNC uses the RTAI kernel or PREEMPT-RT with LinuxCNC's 'uspace' flavour of the RTAPI.
Installing LinuxCNC and the underlying real-time kernel patches on a base Linux system can be a daunting task. Paul Corner came to the rescue with the BDI (Brain Dead Install), a CD from which a complete working system (Linux, real-time patches, and LinuxCNC) could be installed. This made LinuxCNC accessible to a much larger user community. The BDI has since evolved into a bootable (live) ISO that can be burned to a CD or USB drive and run on almost any PC-style computer to test-drive LinuxCNC without having to install the system. Bootable LinuxCNC ISOs are available for Debian wheezy (RTAI kernel) and Debian stretch (RT-PREEMPT kernel).
The policy for LinuxCNC is to build packages and offer support for Debian, but pre-built binary packages are also available for other Linux systems and architectures.
Design
LinuxCNC uses the model of 'sense, plan, act' in its interactions with hardware. For instance, it reads the current axis position, calculates a new target position/voltage, and then writes that to the hardware. There is no buffering of commands nor are externally initiated reads or writes allowed. This no-buffering approach gives the most freedom to adding or changing capabilities of LinuxCNC. By using relatively "dumb" external hardware and programming the capabilities in the host computer, LinuxCNC is not locked to any one piece of hardware. It also allows an interested user to easily change behaviour/capabilities/hardware.
This model tends to lend itself to specific types of external interfaces: PCI, PCIe, the parallel port (in SPP or EPP mode), ISA, and Ethernet have all been used for motor control. USB and RS-232 serial are not good candidates, since USB has poor real-time behaviour and RS-232 is too slow for motor control.
LinuxCNC has basic "realtime" requirements because of this model. The interval between reading and writing must be consistent and reasonably fast. A typical machine does realtime calculations in a 1 millisecond repeating thread. The reading and writing to hardware must be a small part of this time, e.g. 200 microseconds, otherwise the phase shift makes tuning more difficult and there is less time available for the non-realtime programs, which may make the screen controls less responsive.
LinuxCNC "employs a trapezoidal velocity profile generator."
Configuration
LinuxCNC uses a software layer called HAL (Hardware Abstraction Layer).
HAL allows a multitude of configurations to be built flexibly: one can mix and match various hardware control boards and output control signals through the parallel port or serial port, while driving stepper or servo motors, solenoids and other actuators.
LinuxCNC also includes a software programmable logic controller (PLC) which is usually used in extensive configurations (such as complex machining centres). The software PLC is based on the open source project Classicladder, and runs within the real-time environment.
See also
Machinekit, an open source project to port and extend EMC2/LinuxCNC to run efficiently on the BeagleBone and related hardware.
References
Notes
Bibliography
Proctor, F. M., and Michaloski, J., "Enhanced Machine Controller Architecture Overview," NIST Internal Report 5331, December 1993. Available online at ftp://129.6.13.104/pub/NISTIR_5331.pdf
Lumia, "The Enhanced Machine Controller Architecture", 5th International Symposium on Robotics and Manufacturing, Maui, HI, August 14–18, 1994, https://www.nist.gov/customcf/get_pdf.cfm?pub_id=820483
Fred Proctor et al., "Simulation and Implementation of an Open Architecture Controller", Simulation, and Control Technologies for Manufacturing, Volume 2596, Proceedings of the SPIE, October 1995, https://web.archive.org/web/20100527174141/http://www.isd.mel.nist.gov/documents/proctor/sim/sim.html
Fred Proctor, John Michaloski, Will Shackleford, and Sandor Szabo, "Validation of Standard Interfaces for Machine Control", Intelligent Automation and Soft Computing: Trends in Research, Development, and Applications, Volume 2, TSI Press, Albuquerque, NM, 1996, https://web.archive.org/web/20100527165142/http://www.isd.mel.nist.gov/documents/proctor/isram96/isram96.html
Shackleford and Proctor, "Use of open source distribution for a Machine tool Controller", Sensors and controls for intelligent manufacturing. Conference, Boston MA, 2001, vol. 4191, pp. 19–30, https://web.archive.org/web/20100820224129/http://www.isd.mel.nist.gov/documents/shackleford/4191_05.pdf or
Morar et al., "ON THE POSSIBILITY OF IMPROVING THE WIND GENERATORS", International Conference on Economic Engineering and Manufacturing Systems, Brasov, 25–26 October 2007, https://web.archive.org/web/20120313054238/http://www.recentonline.ro/021/Morar_L_01a.pdf
Zhang et al., "Development of EMC2 CNC Based on Qt", Manufacturing Technology & Machine Tool, 2008, http://en.cnki.com.cn/Article_en/CJFDTOTAL-ZJYC200802046.htm
Leto et al., "CAD/CAM INTEGRATION FOR NURBS PATH INTERPOLATION ON PC BASED REAL-TIME NUMERICAL CONTROL", 8th INTERNATIONAL CONFERENCE ON ADVANCED MANUFACTURING SYSTEMS AND TECHNOLOGY JUNE 12–13, 2008 UNIVERSITY OF UDINE - ITALY, https://web.archive.org/web/20110703113248/http://158.110.28.100/amst08/papers/art837759.pdf
Xu et al., "Mechanism and Application of HAL in the EMC2", Modern Manufacturing Technology and Equipment 2009–05, http://en.cnki.com.cn/Article_en/CJFDTOTAL-SDJI200905037.htm
Zivanovic et al., "Methodology for Configuring Desktop 3-axis Parallel Kinematic Machine", FME Transactions (2009) 37, 107–115,
Staroveski et al., "IMPLEMENTATION OF A LINUX-BASED CNC OPEN CONTROL SYSTEM", 12th INTERNATIONAL SCIENTIFIC CONFERENCE ON PRODUCTION ENGINEERING –CIM2009, Croatian Association of Production Engineering, Zagreb 2009,
Li et al., "Control system design and simulation of parallel kinematic machine based on EMC2", Machinery Design & Manufacture 2010–08, http://en.cnki.com.cn/Article_en/CJFDTOTAL-JSYZ201008074.htm
Klancnik et al., "Computer-Based Workpiece Detection on CNC Milling Machine Tools Using Optical Camera and Neural Networks", Advances in Production Engineering & Management 5 (2010) 1, 59–68,
External links
LinuxCNC project wiki
The NIST RS274NGC Standard - Version 3 Aug 2000 also available as a PDF
The Enhanced Machine Controller homepage at NIST
Computer-aided engineering
Free software programmed in C
CNC |
4043451 | https://en.wikipedia.org/wiki/Db2%20Database | Db2 Database | Db2 Database formerly known as Db2 for Linux, UNIX and Windows is a database server product developed by IBM. Also known as Db2 LUW for brevity, it is part of the Db2 family of database products. Db2 LUW is the "Common Server" product member of the Db2 family, designed to run on most popular operating systems. By contrast, all other Db2 products are specific to a single platform.
Db2 11.5 has native language support for Python, Ruby, Go, Java, PHP, Node.js and Sequelize. It is part of the Hybrid Data Management Platform offering, which intends to enable structured, semi-structured, or unstructured data to be accessed and analyzed, whether it is stored on premises or on computers elsewhere (cloud).
History
The first release of Db2 LUW was Db2 Universal Database version 5, available on UNIX, Windows and OS/2 platforms. This product stemmed from two earlier products, Db2 Common Server version 2 and Db2 Parallel Edition. Db2 Universal Database version 5 continued IBM's new direction of using a common code base to support Db2 on different platforms, while incorporating the shared-nothing features of Db2 Parallel Edition to support large data warehousing databases.
Db2 LUW was initially called Db2 Universal Database (UDB), but over time IBM marketing started to use the same term for other database products, notably mainframe (z-Series) Db2. Thus, the Db2 for Linux, UNIX and Windows moniker became necessary to distinguish the common server Db2 LUW product from single-platform Db2 products.
The current Db2 LUW product runs on multiple Linux and UNIX distributions, such as Red Hat Enterprise Linux, SUSE Linux, IBM AIX, HP-UX, and Solaris, and most Windows systems. Earlier versions also ran on OS/2. Multiple editions are marketed for different sizes of organisation and uses. The same code base is also marketed without the Db2 name as IBM InfoSphere Warehouse edition.
In 2017, the "Db2 UDB" name became just "Db2".
Key features
In addition to standard ACID-compliant row-organized relational database functionality, some of its key features are:
IBM BLU Acceleration: OLAP-oriented column-organized tables, compressed with order-preserving "approximate Huffman encoding", exploiting SIMD vector processing of compressed data. Because the compression is order-preserving, a greater range of operations can be performed on compressed data (a short sketch of this idea follows the list).
pureScale: A data-sharing clustering of the database over multiple servers for scalability and resilience. This technology was taken from the mainframe (z-Series) Db2 product. This form of clustering suits OLTP workloads.
Database partitioning feature: A shared-nothing approach to clustering, with data hashed across multiple partitions on the same server or different processors. With the right database design, this approach allows near-linear scaling. This form of clustering is generally employed for large data warehouses rather than OLTP workloads.
XML support: XML-specific storage and indexing, accessible by both SQL and also XQuery.
NoSQL support: Currently graph triple stores and JSON support
Storage Optimization
Data Federation
Federation Server
Continuous Data Ingest
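The benefit of order-preserving compression mentioned under BLU Acceleration can be sketched as follows. Because the dictionary codes below are assigned in sorted order, a range predicate can be evaluated directly on the small integer codes without decompressing the column; the column values and the predicate are invented for the example, and the sketch is not IBM's actual encoding scheme.

# Sketch of order-preserving dictionary encoding (illustrative, not IBM's implementation).
column = ["cherry", "apple", "banana", "apple", "date", "banana", "cherry"]

# Assign codes in sorted order so that code comparisons mirror value comparisons.
dictionary = {value: code for code, value in enumerate(sorted(set(column)))}
encoded = [dictionary[v] for v in column]          # compressed column of small integers

# Evaluate the predicate  value < 'cherry'  directly on the codes.
threshold = dictionary["cherry"]
matches = [i for i, code in enumerate(encoded) if code < threshold]

decode = {code: value for value, code in dictionary.items()}
print([decode[encoded[i]] for i in matches])       # rows containing apple and banana

If the codes were assigned in arbitrary order, only equality predicates could be pushed down to the compressed data; order preservation is what extends this to range comparisons.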
Editions
IBM offers three editions: Db2 Community Edition, Standard Server Edition, and Advanced Server Edition.
IBM Db2 Community Edition
IBM Db2 Community Edition is an edition of the IBM Db2 data server that is free to download, use and redistribute, and has both XML database and relational database management system features. It is limited to four CPU cores and 16 GB of RAM, and comes without enterprise support or fix packs. Db2 Community Edition has no limit on the number of users or on database size.
On June 27, 2019, IBM released Db2 V11.5, a Db2 update designed to deliver enhancements to help automate data management, eliminate ETL, and support artificial intelligence data workloads. Along with the update, IBM unveiled streamlined offerings. The free version of Db2 is the Community Edition, which contains all features and does not expire, but is capped at four CPU cores and 16 GB of RAM. IBM Db2 Community Edition replaces the Db2 Express edition.
History
On June 27, 2019, IBM announced a special free version of Db2 Database called Db2 Community edition. The Db2 Community edition was created for the 11.5 release of IBM Db2. The Db2 Community edition replaced the previously free version of IBM Db2 known as DB2 Express-C.
On January 30, 2006, IBM announced a special free version of the DB2 Express edition called DB2 Express-C. The DB2 Express-C edition was created for the 8.2 release of IBM Db2. After this, DB2 Express-C was created for all new DB2 versions: 9.1 (codenamed "Viper"), 9.5 (codenamed "Viper 2"), 9.7 (codenamed "Cobra"), 10.1 (codenamed "Galileo"), 10.5 (codenamed "Kepler") and 11.1.
The IBM DB2 pureXML implementation of XML database features was introduced in the beta of DB2 9.
Supported environments
The Community edition download is available for the following platforms: IBM Db2 11.5 Edition for AIX, IBM Db2 11.5 Edition for Windows on AMD64 and Intel EM64T systems (x64), IBM Db2 11.5 for Linux on AMD64 and Intel EM64T systems (x64), IBM Db2 11.5 for Linux on POWER little endian systems. There is also a Docker Image download available for the Community edition.
Limitations
IBM Db2 Community edition is limited to use up to 16 GB RAM and four CPU cores. As of version 11.5.7, there was no limit on the database size. Some previous version 11.5 point releases imposed a limit of 100 GB on the database size. The database engine does not limit the number of concurrent user connections. A prior free Db2 version, the IBM DB2 Express-C, supported up to 16 GB RAM and two CPU cores.
The Db2 Community edition feature set is similar to Db2 Standard and Advanced editions. The main difference is that the Community edition has lower CPU and memory limits, and is unsupported. It has the following extra features enabled:
Backup compression
Homogeneous federation (only DB2, Informix Data Server and Oracle targets are supported)
Homogeneous SQL replication
Net Search Extender
XML storage
Spatial extender
Updates
Db2 Community edition is unsupported and regular Db2 fix packs cannot be applied to it. IBM does not release any fixes, but it does publish updated installation images and remove old ones. Unix versions need to be reinstalled, but it is possible to perform in-place updates on Windows versions by simply running the installation program of a newer version. Access to regular Db2 fix packs, which are released several times per year, requires purchasing the Db2 Standard or Advanced editions. Installation images are traditionally refreshed once for every major Db2 release, to sync the code with the second fix pack.
Subscription
For the Db2 Community edition there is no annual subscription; it can be used free of charge indefinitely. Users who want to scale beyond four cores and 16 GB of RAM do not need to migrate their workload to an upgraded environment; instead, they apply a license key against the existing installation to access additional capacity.
IBM Db2 Standard Edition
The Db2 Standard Edition is available as a perpetual software license for production and non-production use for up to 16 processor cores and 128 GB RAM with IBM support. For production use, Db2 Standard Edition can be licensed based on a Virtual Processor Core metric, wherein it is licensed by the total count of processor cores in a non-partitioned physical server, or virtual cores assigned to a virtual server. For non-production use, Db2 Standard Edition can be licensed based on the total count of authorized users.
IBM Db2 Advanced Edition
The Db2 Advanced Edition is available only as a component of the IBM Hybrid Data Management Platform (HDMP). Within HDMP, Db2 is available both as a perpetual software license and as a monthly subscription for unrestricted production and non-production use with premium IBM support. For both the HDMP perpetual license and the subscription offering, FlexPoints must be purchased; FlexPoints are generic licensing credits that can be used to deploy any Db2-family software product or cloud service offering.
Db2 Advanced Edition offers these benefits:
Improves application performance and analytics for faster decisions.
Delivers high availability and disaster recovery capabilities.
Provides a secure, flexible environment
Interfaces with a variety of data more efficiently.
Improves productivity and reduces administration efforts.
References
Database servers
IBM DB2 |
390263 | https://en.wikipedia.org/wiki/Jython | Jython | Jython is an implementation of the Python programming language designed to run on the Java platform. The implementation was formerly known as JPython until 1999.
Overview
Jython programs can import and use any Java class. Except for some standard modules, Jython programs use Java classes instead of Python modules. Jython includes almost all of the modules in the standard Python programming language distribution, lacking only some of the modules implemented originally in C. For example, a user interface in Jython could be written with Swing, AWT or SWT. Jython compiles Python source code to Java bytecode (an intermediate language) either on demand or statically.
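A minimal example of this Java interoperability is shown below. It is ordinary Python syntax executed by the Jython 2.x interpreter, using only classes from the standard Java class library.

# Run with the Jython interpreter; java.util and javax.swing ship with the JDK.
from java.util import ArrayList
from javax.swing import JFrame, JLabel

names = ArrayList()              # a java.util.ArrayList used like a Python list
names.add("Jython")
names.add("Java")
print names.size(), names        # Jython 2.x uses the Python 2 print statement

frame = JFrame("Hello from Jython")
frame.add(JLabel("A Swing window created from Python syntax"))
frame.setSize(300, 100)
frame.setVisible(True)

No wrapper or binding code is needed: the Java classes are imported and called directly, which is the feature the paragraph above describes.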
History
Jython was initially created in late 1997 to replace C with Java for performance-intensive code accessed by Python programs, moving to SourceForge in October 2000. The Python Software Foundation awarded a grant in January 2005. Jython 2.5 was released in June 2009.
Status and roadmap
The most recent release is Jython 2.7.2. It was released on 21 March 2020 and is compatible with Python 2.7.
Although Jython implements the Python language specification, it has some differences and incompatibilities with CPython, which is the reference implementation of Python.
License terms
From version 2.2 on, Jython (including the standard library) is released under the Python Software Foundation License (v2). Older versions are covered by the Jython 2.0, 2.1 license and the JPython 1.1.x Software License.
The command line interpreter is available under the Apache Software License.
Usage
JBoss Application Server's command line interface scripting using Jython
Oracle Weblogic Server Scripting Tool uses Jython
IBM Rational development tools allow Jython scripting
IBM WebSphere Application Server tool scripting with wsadmin allows using Jython and Jacl
ZK – a Java Ajax framework that allows glue logic written in Jython
Ignition - A software development platform focused on HMI and SCADA
Ghidra - a reverse engineering tool developed by the NSA allows plugins to be written in Java or Jython
openHAB - home automation software
See also
List of Java scripting languages
IronPython – an implementation of Python for .NET and Mono
PyPy – a self-hosting interpreter for the Python programming language.
JRuby – similar project for the Ruby programming language.
References
External links
JVM programming languages
Object-oriented programming languages
Python (programming language) implementations
Scripting languages
Software using the PSF license |
639806 | https://en.wikipedia.org/wiki/Computer%20Modern | Computer Modern | Computer Modern is the original family of typefaces used by the typesetting program TeX. It was created by Donald Knuth with his Metafont program, and was most recently updated in 1992. Computer Modern, or variants of it, remains very widely used in scientific publishing, especially in disciplines that make frequent use of mathematical notation.
Design
Computer Modern is a 'Didone', or modern serif font, a genre that emerged in the late 18th century as a contrast to the more organic designs that preceded them. Didone fonts have high contrast between thick and thin elements, and their axis of "stress" or thickening is perfectly vertical. Computer Modern was specifically based on the 10 point size of the American Lanston Monotype Company's Modern Extended 8A, part of a family Monotype originally released in 1896. This was one of many modern faces issued by typefounders and Monotype around this period, and the standard style for body text printing in the late nineteenth century.
In creating the TeX publishing system, Knuth was influenced by the history of mathematics and a desire to achieve the "classic style" of books printed in metal type. Modern faces were used extensively for printing mathematics, especially before Times New Roman became popular for mathematics printing from the 1950s.
The most unusual characteristic of Computer Modern, however, is the fact that it is a complete type family designed with Knuth's Metafont system, one of the few typefaces developed in this way. The Computer Modern source files are governed by 62 distinct parameters, controlling the widths and heights of various elements, the presence of serifs or old-style numerals, whether dots such as the dot on the "i" are square or rounded, and the degree of "superness" in the bowls of lowercase letters such as "g" and "o". This allows Metafont designs to be processed in unusual ways; Knuth has shown effects such as morphing in demonstrations, where one font slowly transitions into another over the course of a text. While it attracted attention for the concept, Metafont has been used by few other font designers; by 1996 Knuth commented "asking an artist to become enough of a mathematician to understand how to write a font with 60 parameters is too much" (question-and-answer session at CSTUG, Charles University, Prague, March 1996; reproduced in TUGboat 17 (4) (1996), pp. 355–367, http://www.tug.org/TUGboat/Articles/tb17-4/tb53knuc.pdf), while digital-period font designer Jonathan Hoefler commented in 2015 that "Knuth's idea that letters start with skeletal forms is flawed".
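The idea of a typeface governed by numeric parameters, and of morphing between two designs, can be sketched abstractly. The parameter names and values in the following Python fragment are invented for illustration and are not Knuth's actual Computer Modern parameters; the point is only that interpolating every parameter yields a continuum of intermediate designs.

# Illustrative only: invented parameter sets, not the real Computer Modern parameter files.
serif_style = {"stem_width": 25, "serif_length": 12, "x_height": 155, "contrast": 3.0}
sans_style  = {"stem_width": 32, "serif_length": 0,  "x_height": 160, "contrast": 1.1}

def interpolate(a, b, t):
    """Blend two parameter sets; t=0 gives a, t=1 gives b, values in between morph."""
    return {key: (1 - t) * a[key] + t * b[key] for key in a}

for step in range(5):
    t = step / 4.0
    print(t, interpolate(serif_style, sans_style, t))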
Derived versions
Knuth produced his original Computer Modern fonts using Metafont, a program that reads stroke-based definitions of glyphs and outputs ready-to-use fonts as bitmap image files. He mostly left the font, as with other components of TeX, in the public domain, but made one request: that any derivative work based on Knuth's software not carry the same name, a request Knuth made to assure quality control. This stipulation is similar to the one found in the SIL Open Font License, and later derivatives of Computer Modern have been released under that license.
The advance of publishing technology (PostScript, PDF, laser printers) has reduced the need for bitmap fonts. The preferred formats are now outline fonts such as Type 1, TrueType, or OpenType, which can be rendered efficiently at arbitrary resolution and using sophisticated anti-aliasing techniques by printer firmware or on-screen document viewers. Therefore, several other projects have ported the Computer Modern fonts into such formats. Some of these projects have also complemented Computer Modern with
additional characters (euro, accented characters, Cyrillic and Greek script coverage)
different font encodings (to overcome problems with Knuth's original 8-bit character sets)
additional font style variants
Several such derivatives are now also widely used and included in TeX Live, a modern TeX distribution.
A current extended release of the Computer Modern family in the general-purpose OpenType format is the CMU distribution (for Computer Modern Unicode):
CMU Serif, the main Computer Modern font family. This includes the four traditional styles of font (regular, italic, bold, bold italic), and also:
CMU Serif upright italic, an upright italic style similar to cursive upright handwriting
CMU Serif bold non-extended, a bold weight duplexed to have the same width as the regular style
CMU Serif roman and bold slanted, two oblique styles
CMU Classical Serif, an italic design with slightly simpler serif designs
Concrete Roman, a slab serif font in the four standard styles
CMU Typewriter, a typewriter-style slab serif font
CMU Sans Serif, a complementary sans-serif font, and CMU Bright, a lighter style of the same design
CMU Sans demi-condensed, a condensed style of the same design
BlueSky
Computer Modern was first transformed to a PostScript Type 3 font format by BlueSky, Inc. in 1988, and then to Type 1 in 1992 to include font hinting. The Type 1 version has since then been donated to the American Mathematical Society (AMS) which distributes them freely under the Open Font License. It is found in most standard TeX distributions.
Latin Modern
The Latin Modern implementation, maintained by Bogusław Jackowski and Janusz M. Nowacki of the TeX User Group Poland, is now standard in the TeX community and was made through a Metafont/MetaPost derivative called METATYPE1. It was derived from the BlueSky Type 1 fonts, which were converted back into outline-based METATYPE1 programs, from which the extended Type 1 and OpenType Latin Modern fonts were then developed. ConTeXt uses Latin Modern as its default font, instead of Computer Modern.
The Type 1 to METATYPE1 to Type 1 round-trip conversion involved in the production of the Latin Modern fonts tried to preserve the hinting information of the BlueSky fonts; however, it added rounding errors that affect the quality of the hinting at low pixel sizes. As a result, on-screen display of the Latin Modern fonts can result in a less even display of kerning and character heights than is the case with the BlueSky fonts.
The same process was later extended to some free PostScript font clones under the umbrella project called TeX Gyre.
The Latin Modern font has also gained an OpenType math table.
New Computer Modern
The New Computer Modern font family is a huge extension (“huge” in terms of the number of additional glyphs) of the Latin Modern fonts which adds support for several more languages such as Greek, Cyrillic, Hebrew, Cherokee and Coptic. This font family comes in two weights, “Regular” and “Book”. The book weight is supposed to look slightly heavier compared to the “Regular”. Both the weights include support for typesetting mathematics.
MLModern
MLModern is based on the Latin Modern font. It avoids the spindliness of most other Type 1 versions of Computer Modern and hence looks thicker in comparison to Latin Modern or Computer Modern.
Others
EC fonts – look much like Computer Modern, but have slightly different metrics. These were the first TeX fonts to use the “Cork encoding” (in LaTeX also known as T1 encoding) that provides precomposed glyphs for West-European languages. The original EC fonts were only available as Metafont generated bitmaps.
TC fonts – the TeX Companion fonts provide a number of additional symbols commonly used in text.
BaKoMa fonts – another automatically generated Type1 version of Computer Modern by Basil K. Malyshev, dating to 1994. The fonts remain available for download after Malyshev's 2019 death.
CM-super – a very large extension of Computer Modern, available in a variety of encodings. These fonts were automatically vectorized from Computer Modern or EC font bitmaps and therefore lack the hinting information in the BlueSky fonts.
CM-LGC – a Latin, Greek, Cyrillic extension
GUST – adding many diacritics, and Vietnamese
See also
STIX Fonts, a project to create Times New Roman-compatible mathematics fonts. Open-sourced under the SIL open font license.
References
Further reading
Donald E. Knuth, Computers and Typesetting, Volume E: The Computer Modern Fonts, Addison-Wesley, Reading, Mass., 1986.
External links
Computer Modern Unicode fonts home page
Newest 0.7 unicode version from Sourceforge - .ttf file compressed as .ttf.tar.gz
Old 0.6 release - Computer Modern (CMU) release, for general use (select otf)
Original Computer Modern fonts
Unified serif and sans-serif typeface families
Modern serif typefaces
TeX
Open-source typefaces
Free software Unicode typefaces
Mathematical typefaces
Mathematical OpenType typefaces
Typefaces designed by Donald Knuth
Typefaces with text figures
Typefaces and fonts introduced in 1978 |
1520386 | https://en.wikipedia.org/wiki/Generic%20Security%20Services%20Application%20Program%20Interface | Generic Security Services Application Program Interface | The Generic Security Service Application Program Interface (GSSAPI, also GSS-API) is an application programming interface for programs to access security services.
The GSSAPI is an IETF standard that addresses the problem of many similar but incompatible security services in use today.
Operation
The GSSAPI, by itself, does not provide any security. Instead, security-service vendors provide GSSAPI implementations - usually in the form of libraries installed with their security software. These libraries present a GSSAPI-compatible interface to application writers who can write their application to use only the vendor-independent GSSAPI.
If the security implementation ever needs replacing, the application need not be rewritten.
The definitive feature of GSSAPI applications is the exchange of opaque messages (tokens) which hide the implementation detail from the higher-level application.
The client and server sides of the application are written to convey the tokens given to them by their respective GSSAPI implementations.
GSSAPI tokens can usually travel over an insecure network as the mechanisms provide inherent message security.
After the exchange of some number of tokens, the GSSAPI implementations at both ends inform their local application that a security context is established.
Once a security context is established, sensitive application messages can be wrapped (encrypted) by the GSSAPI for secure communication between client and server.
Typical protections guaranteed by GSSAPI wrapping include confidentiality (secrecy) and integrity (authenticity). The GSSAPI can also provide local guarantees about the identity of the remote user or remote host.
The GSSAPI describes about 45 procedure calls. Significant ones include:
GSS_Acquire_cred Obtains the user's identity proof, often a secret cryptographic key
GSS_Import_name Converts a username or hostname into a form that identifies a security entity
GSS_Init_sec_context Generates a client token to send to the server, usually a challenge
GSS_Accept_sec_context Processes a token from GSS_Init_sec_context and can generate a response token to return
GSS_Wrap Converts application data into a secure message token (typically encrypted)
GSS_Unwrap Converts a secure message token back into application data
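How these calls cooperate to establish a security context can be sketched as follows. The Python below is illustrative pseudocode only: the mech and transport objects and their method names are invented stand-ins for a real GSSAPI binding, not an actual API.

# Illustrative pseudocode for the GSSAPI context-establishment loop.
# "mech", "transport" and their method names are invented stand-ins, not a real binding.

def establish_initiator(mech, transport, target_name):
    cred = mech.acquire_cred()                      # cf. GSS_Acquire_cred
    name = mech.import_name(target_name)            # cf. GSS_Import_name
    context, token = mech.init_sec_context(cred, name, input_token=None)
    while not context.established:
        reply = transport.exchange(token)           # opaque tokens cross the network
        context, token = mech.init_sec_context(cred, name, input_token=reply)
    if token:
        transport.send(token)                       # final token, if the mechanism needs one
    return context                                  # now usable for wrap/unwrap

def protect(context, transport, message):
    transport.send(context.wrap(message))           # cf. GSS_Wrap: confidentiality/integrity

def receive(context, transport):
    return context.unwrap(transport.recv())         # cf. GSS_Unwrap

The key point is that the application never inspects the tokens; it only moves them between the two GSSAPI implementations until both report that the context is established.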
The GSSAPI is standardized for the C language (RFC 2744). Java implements the GSSAPI as JGSS, the Java Generic Security Services Application Program Interface.
Some limitations of GSSAPI are:
standardizing only authentication, not authorization;
assuming a client–server architecture.
Anticipating new security mechanisms, the GSSAPI includes a negotiating pseudo mechanism, SPNEGO, that can discover and use new mechanisms not present when the original application was built.
Relationship to Kerberos
The dominant GSSAPI mechanism implementation in use is Kerberos.
Unlike the GSSAPI, the Kerberos API has not been standardized, and various existing implementations use incompatible APIs.
The GSSAPI allows Kerberos implementations to be API compatible.
Related technologies
RADIUS
SASL
TLS
SSPI
SPNEGO
RPCSEC GSS
Key concepts
Name A binary string that labels a security principal (i.e., user or service program) - see access control and identity. For example, Kerberos uses names like user@REALM for users and service/hostname@REALM for programs.
Credentials Information that proves an identity; used by an entity to act as the named principal. Credentials typically involve a secret cryptographic key.
Context The state of one end of the authenticating/authenticated protocol. May provide message protection services, which can be used to compose a secure channel.
Tokens Opaque messages exchanged either as part of the initial authentication protocol (context-level tokens), or as part of a protected communication (per-message tokens)
Mechanism An underlying GSSAPI implementation that provides actual names, tokens and credentials. Known mechanisms include Kerberos, NTLM, Distributed Computing Environment (DCE), SESAME, SPKM, LIPKEY.
Initiator/acceptor The peer that sends the first token is the initiator; the other is the acceptor. Generally, the client program is the initiator while the server is the acceptor.
History
July 1991: IETF Common Authentication Technology (CAT) Working Group meets in Atlanta, led by John Linn
September 1993: GSSAPI version 1 (RFC 1508, RFC 1509)
May 1995: Windows NT 3.51 released, includes SSPI
June 1996: Kerberos mechanism for GSSAPI (RFC 1964)
January 1997: GSSAPI version 2 (RFC 2078)
October 1997: SASL published, includes GSSAPI mechanism (RFC 2222)
January 2000: GSSAPI version 2 update 1 (RFC 2743, RFC 2744)
August 2004: KITTEN working group meets to continue CAT activities
May 2006: Secure Shell use of GSSAPI standardised (RFC 4462)
See also
PKCS #11
References
External links
The Generic Security Service API Version 2 update 1
The Generic Security Service API Version 2: C-Bindings
The Kerberos 5 GSS-API mechanism
The Kerberos 5 GSS-API mechanism: Version 2
The Simple and Protected GSS-API Negotiation Mechanism (SPNEGO)
The Simple Public-Key GSS-API Mechanism (SPKM)
LIPKEY - A Low Infrastructure Public Key Mechanism Using SPKM
Operating system security
Internet Standards |
8562286 | https://en.wikipedia.org/wiki/VMware%20Fusion | VMware Fusion | VMware Fusion is a software hypervisor developed by VMware for Macintosh computers. VMware Fusion allows Intel-based Macs to run virtual machines with guest operating systems—such as Microsoft Windows, Linux, NetWare, Solaris, or macOS—within the host macOS operating system.
Overview
VMware Fusion, which uses a combination of paravirtualization and hardware virtualization made possible by the Mac transition to Intel processors in 2006, marked VMware's first entry into Macintosh-based x86 virtualization. VMware Fusion uses Intel VT present in the Intel Core microarchitecture platform. Much of the underlying technology in VMware Fusion is inherited from other VMware products, such as VMware Workstation, allowing VMware Fusion to offer features such as 64-bit and SMP support.
VMware Fusion 1.0 was released on August 6, 2007, exactly one year after being announced.
VMware Fusion can run any of hundreds of operating systems provided by the user, including many older versions of macOS, which gives users a way to run older Mac application software that can no longer be run under the current version of macOS, such as 32-bit apps and Rosetta (PowerPC) apps.
System requirements
Most Apple Macs launched in 2012 or later for VMware Fusion 12, most Macs launched in 2011 or later for VMware Fusion 11, any x86-64 capable Intel Mac for VMware Fusion 8
4 GB of RAM (minimum)
750 MB free disk space
5 GB free disk space for each virtual machine (10 GB or more recommended)
macOS Catalina or later for VMware Fusion 12, Mac OS X 10.11 El Capitan or later for VMware Fusion 11, Mac OS X 10.9 Mavericks or later for VMware Fusion 8
Operating system installation media for virtual machines
Optional: nVidia GeForce 8600M, ATI Radeon HD 2600 or better graphics for Windows Aero support
See also
Desktop virtualization
Hardware virtualization
VMware Workstation
VMware Workstation Player
References
Further reading
External links
Fusion
Virtualization software
MacOS software |
7073360 | https://en.wikipedia.org/wiki/Camp%20King | Camp King | Camp King is a site on the outskirts of Oberursel, Taunus (in Germany), with a long history. It began as a school for agriculture under the auspices of the University of Frankfurt. During World War II, the lower fields became an interrogation center for the German Air Force. After World War II, the United States Army also used it as an interrogation center and intelligence post. In 1968, it became the command and control center for the United States Army Movements Control Agency - Europe (USAMCAEUR). Today it has been rebuilt as a German housing area.
History
Prior to World War II (1926–1939)
What later became known during World War II as Auswertstelle West began as an educational farm established in 1936 under the auspices of the University of Frankfurt. Students learned gardening, bee keeping and animal husbandry as well as general farming techniques; it was in essence an agricultural learning center.
World War II
During World War II, the land below the school was adapted to military use as Auswertstelle West, also often erroneously called Dulag Luft. The confusion arises because the post initially housed both the Dulag (transit camp) and the interrogation center; Dulag Luft itself was initially on the post but was later transferred to Frankfurt and later to Wetzlar.
Activities at Auswertstelle West were intelligence-related. Captured Allied air crews were brought to the post for interrogation, and once the interrogations were completed they were transferred to their Stalag. The center housed many types of intelligence, including unit histories on most Allied air forces.
During this time the post also picked up its nickname "The Goat Farm". As mentioned earlier, the lands requisitioned for military use were the agricultural fields below the school. One of these fields was home to an ill-tempered goat noted for chasing prisoners who strayed into its territory.
After the defeat of Nazi Germany, the British convened a war crimes trial due to the allegations of ill treatment of British Prisoners of War interrogated at this facility. The hearing, known as the "Dulag Luft Trial", was convened in Wuppertal, Germany, beginning on November 26, 1945. Four officers were charged: Killenger, Junge, Eberhardt, and Boehringer. Killenger and Junge were sentenced to five years confinement. Eberhardt received three years. Boehringer was acquitted.
Meanwhile, the facility itself was put by the victors to their own use (see following).
Post World War II (1945–1953)
As the war ended, the Americans stumbled across the post. Because the facilities were already designed for interrogation and intelligence gathering, it was decided to continue using them for that purpose. Under U.S. control, the post was originally, unofficially, known as Camp Sibert (after General Edwin Sibert, the senior intelligence officer for the U.S. Zone); it should not be confused with the domestic U.S. post of Camp Sibert in Alabama. Department of Defense records indicate that several Mobile Field Interrogation Units moved into the post to serve at the army and army group levels. On September 19, 1946, General Order 264 named the intelligence center "Camp King", after Colonel Charles B. King, an intelligence officer who died on June 22, 1944, while accompanying a patrol bringing back prisoners.
Officially the European Command Intelligence Center, Oberursel, it served as a United States interrogation center, engaged initially in denazification and later in the interrogation of defectors from, and agents of, the Warsaw Pact, including many intelligence sources as well as scientists.
The book The History of Camp King lists the following people:
Karl Brandt, Hitler’s personal surgeon and in charge of sanitation
Grand Admiral Karl Doenitz, Commander of the German navy,
Hans Frank, Reich Minister, Governor-General of occupied Poland
Reich Marshal Hermann Göring, Chief of the German Air Force,
Colonel General Alfred Jodl, Chief of Operations Staff of the German Armed Forces
Field Marshal Wilhelm Keitel, chief of the Oberkommando der Wehrmacht
Field Marshal Albert Kesselring, Supreme Commander West
Some civilians were held at the post, including German test pilot Hanna Reitsch and—at the request of the FBI, before her transfer to the United States and trial for treason—the German-American Mildred Elizabeth Sisk, one of the propagandists referred to as "Axis Sally".
In July 1946 General Reinhard Gehlen arrived on the post and established the Gehlen Organization which later went on to become the BND (Bundesnachrichtendienst, or "Federal Intelligence Service").
1953–1968
In 1953 Camp King was assigned to the 513th Military Intelligence Brigade. The post was still used as an interrogation center but also assumed intelligence duties as a command center for many field offices in Europe.
The post was a major intelligence center for the European Theater, supporting many field offices throughout Germany. The unit grew so large that, instead of exercising command and control, it came to serve largely in a support role; Col. Franz Ross rectified this and the unit resumed its intended function.
In the fall of 1968, the 513th Military Intelligence Brigade merged with the 66th Military Intelligence Group and relocated to the McGraw Kaserne in Munich, Germany.
1968 to 1993
In 1968 the United States Army Movements Control Center - Europe (USAMCAEUR) was assigned to Camp King. The organization was reflagged on 1 April 1975 as the 4th Transportation Brigade (redesignated 4th Transportation Command on 16 April 1981), reactivating the colors of a unit that had been in Vietnam and inactivated on 28 June 1972 at Fort Lewis, WA, after its return. Its mission, as stated in military records, was to operate integrated transportation service in support of US forces in Central Europe.
The responsibilities encompassed:
Operation of a military highway transportation system primarily known as the 37th Transportation Group (Trucks and Containers)
Operation of military water terminals, notably in Bremerhaven, Germany, and Rotterdam, Netherlands (container ports).
Reception, processing, and on-carriage transportation of military units deployed in Europe
Movement and control of personnel and material.
Traffic management for US forces in Central Europe.
Preparation of USAREUR wartime movement program.
Intra-theater transport, employing both US Air Force and US Army aircraft.
Traffic regulation services for US forces in Central Europe.
The unit was inactivated in 1991 during the post-Cold War drawdown and its mission assigned to the 1st Transportation Movement Control Agency, which was formed from the command and control section of the former 4th TRANSCOM. In the spring of 1990, Headquarters, 22d Signal Brigade was moved to Camp King.
1993 to present
In 1993 the post was deactivated and was returned to the German Government. Since that time it has been redeveloped into a housing area. In honor of the past, the people of Oberursel have named the area Camp King.
There is a small monument in the housing area to the history of the area as a military base.
References
Bibliography
Gehlen, Reinhard. The Memoirs of General Reinhard Gehlen. New York, New York: World Publishing, 1972
Deckname Artischocke — Egmont R. Koch and Michael Wech, Egmont R. Koch Filmproduktion, Bremen, Germany, 12 August 2002.
1st TMCA website
22nd Signal Brigade website
Other sources
Numerous Department of Defense documents received from The Historian Headquarter Europe
E-mail from John Finnegan, Historian Inscom.
E-Mails from Sandi Andresen
External links
The German website for the redevelopment
The Oberursel City website
The Oberursel City Camp King website
Federation of American Scientist Gehlen Organization
Link to Memories of Oberursel; Questions Questions Questions
Barracks of the United States Army in Germany
Ger |
3786829 | https://en.wikipedia.org/wiki/Wireless%20Nomad | Wireless Nomad | Wireless Nomad (wirelessnomad.com) was a for-profit cooperative based in Toronto, Canada providing subscriber-owned home and business Internet access along with free Wi-Fi wireless Internet access and music to over a hundred nodes, making it the largest free Wi-Fi network in the country at the time.
It was founded by Steve Wilton and Damien Fox in January 2005, and turned its DSL internet connections over to private ISP TekSavvy in March 2009.
All WiFi nodes were subsequently shut down.
Instead of using Bell Sympatico's or Rogers Cable's retail high-speed Internet access services to provide service to their wireless access points, they were their own ISP under Canadian Radio-television and Telecommunications Commission (CRTC) rules that compel large providers like Rogers and Bell to resell their local loop cable and DSL circuits to smaller ISPs at a regulated (tariffed) price. At the time of its disbanding in mid-2009, WN charged C$36.95+GST per month to members who signed up for home Internet service (3~5 Mbit/s down/720 kbit/s up), which was less than Bell and Rogers charged for their high-speed Internet access service. WN Business service was $59.95 a month. Wireless Nomad was one of the few ISPs in Canada that did not ban its residential subscribers from operating servers. Port 25 was also open for outgoing traffic.
The service covered several areas, mainly in downtown Toronto.
In October 2006, inspired by the fictional narrative in Cory Doctorow's Someone Comes to Town, Someone Leaves Town, the co-op deployed a large antenna in Toronto's Kensington Market, covering about one quarter of the neighborhood with free WiFi Internet. The antenna and WiFi gear was removed from Kensington and installed on the rooftop of Linuxcaffe (named after the Linux Operating system) on the corner of Harbord St. and Grace St. in downtown Toronto in June 2008.
WN used Free and open source software exclusively for its servers, Web site, and wireless routers. The servers ran Gentoo Linux, and the Linksys WRT54GL routers at each location ran OpenWrt, ChilliSpot, and OpenVPN. Wi-Fi mesh networking using OLSR was also part of WN's deployment, with several small mesh networks in use in Toronto.
WN's servers were hosted by the Toronto Community Co-location Project in downtown Toronto from January, 2005 until May 2008.
Colan Schwartz wrote the billing system, Ron Goulard installed antennas and configured equipment, and Jorge Torres-Solis wrote custom firmware for the routers.
Wireless Nomad was a Community Partner with the Canadian Research Alliance for Community Innovation and Networking (CRACIN) and the Community Wireless Infrastructure Research Project (CWIRP) through Prof. Andrew Clement and Matthew A. Wong (graduate student) with the University of Toronto Faculty of Information (formerly Faculty of Information Studies).
In 2008, the co-op filed a submission to the CRTC in support of the Canadian Association of Internet Providers in the Bell throttling issue.
References
External links
Wireless Nomad Blog
Jorge Torres-Solis. Personal page : Curriculum Vitae
Telecom Review Panel Submission
Wireless Nomad on Ohloh
Media cooperatives in Canada
Internet service providers of Canada
Companies based in Toronto |
5205352 | https://en.wikipedia.org/wiki/MSX-DOS | MSX-DOS | MSX-DOS is a discontinued disk operating system developed by Microsoft for the 8-bit home computer standard MSX, and is a cross between MS-DOS 1.25 and CP/M-80 2.
MSX-DOS
MSX-DOS and the extended BASIC with 3½-inch floppy disk support were simultaneously developed by Microsoft and Spectravideo as a software and hardware standard for the MSX home computer standard, to add disk capabilities to BASIC and to give the system a cheaper software medium than Memory Cartridges, and a more powerful storage system than cassette tape. The standard BIOS of an unexpanded MSX computer did not have any floppy disk support, so the additional floppy disk expansion system came with its own BIOS extension ROM (built-in on the disk controller) called the BDOS. Spectravideo also released an MSX-DOS disk in conjunction with the SVI-707 which could be loaded into an MSX system. Once MSX-DOS has been loaded, the system searches the MSX-DOS disk for the COMMAND.COM file and loads it into memory. It not only added floppy disk support commands to MSX BASIC, but also a booting system, with which it was possible to boot a real disk operating system. In that case, the BDOS bypassed the BASIC ROMs, so that the whole 64 KB of address space of the Z80 microprocessor inside the MSX computer could be used for the DOS or for other boot-able disks, for example disk based games. At the same time, the original BIOS ROMs could still be accessed through a "memory bank switch" mechanism, so that DOS-based software could still use BIOS calls to control the hardware and other software mechanisms the main ROMs supplied. Also, due to the BDOS ROM, basic file access capabilities were available even without a command interpreter by using extended BASIC commands.
At initial startup, COMMAND.COM looks for an optional batch file named AUTOEXEC.BAT and, if it exists, executes the commands specified in there. If MSX-DOS is not invoked and Disk BASIC starts, a BASIC program named "AUTOEXEC.BAS" will be carried out instead, if present.
One major difference between MSX-DOS and MS-DOS 2.x was that MSX-DOS did not use the "boot sector" on the floppy to boot, but instead booted using the BDOS ROM routines, and, in a fashion much like MS-DOS 1.25, it used the FAT ID value from the first byte of the FAT to select file system parameter profiles for its FAT12 file system, instead of reading them from the BIOS Parameter Block (BPB) in the boot sector. Also, because there could be more than one floppy disk controller in two or more cartridge slots, MSX-DOS could boot from several different floppy disk drives. This meant that it was possible to have both a 5¼" floppy disk drive and a 3½" disk drive, and the user could boot from either one of them, depending on which drive had a bootable floppy in it.
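The FAT ID mechanism described above can be illustrated with a short sketch: the first byte of the FAT (the media descriptor) selects a parameter profile for the disk. The Python below is illustrative only; the profile table holds a couple of common example geometries rather than MSX-DOS's internal tables, and the FAT offset assumes a single reserved boot sector.

# Illustrative sketch: pick disk parameters from the FAT ID (media descriptor) byte.
# The profile values are example FAT12 geometries, not a dump of MSX-DOS's internal tables.
PROFILES = {
    0xF8: {"size_kb": 360, "sides": 1, "sectors_per_track": 9},   # single-sided 3.5" disk
    0xF9: {"size_kb": 720, "sides": 2, "sectors_per_track": 9},   # double-sided 3.5" disk
}

def media_descriptor(image_path, fat_offset=0x200):
    """Read the first FAT byte of a disk image; 0x200 assumes one 512-byte reserved boot sector."""
    with open(image_path, "rb") as disk:
        disk.seek(fat_offset)
        return disk.read(1)[0]

def select_profile(image_path):
    fat_id = media_descriptor(image_path)
    profile = PROFILES.get(fat_id)
    if profile is None:
        raise ValueError("unknown media descriptor 0x%02X" % fat_id)
    return fat_id, profile

MS-DOS 2.x and later read the same information from the BIOS Parameter Block in the boot sector instead, which is the difference the paragraph above describes.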
Commands
The following is a list of internal commands supported by MSX-DOS.
BASIC
COPY
DATE
DEL
DELETE
DIR
ERASE
FORMAT
MODE
PAUSE
REM
REN
RENAME
TIME
TYPE
VERIFY
Development history
On 10 August 1983, Paul Allen called Tim Paterson, original author of 86-DOS and MS-DOS 1.x, asking him to do a "Z80 version of MS-DOS" for the MSX standard. At the time, Paterson was busy trying to get the first product of his startup Falcon Systems ready to go, so he suggested a few other developers, but Allen said he had already asked. Allen was in a hurry to get it done and nobody else could meet his timeline. Allen and Paterson finally agreed, and on 17 August, they signed an agreement to do "Z80 MS-DOS 1.25" for US$100,000 and the rights for Paterson's company to distribute MS-DOS 2.0, 2.5, and 3.0 with a hardware product without royalty.
For Paterson, this was mostly a translation process. He had already written a Z80-to-8086 assembly language translation program (TRANS.COM); in this case, he was manually translating in the other direction. Because MS-DOS 1.x was modelled after CP/M's API and was able to run CP/M applications that had been source-level translated to 8086, this meant that MSX-DOS would be able to run CP/M programs directly.
For this project, Paterson also wrote a Z80 emulator that ran under MS-DOS, which would allow him to do the entire development project under MS-DOS. The MSX-DOS he was writing had an I/O System layer, that interfaced directly to the I/O System layer of the MS-DOS machine, that was running the emulation. This gave MSX-DOS direct access and control of the disk format. Most of the core code was file management, so this was necessary to test it out.
By 2 October 1983, he had Microsoft BASIC and Microsoft M80 macro assembler running under MSX-DOS. He finished coding COMMAND.COM a few days later. He worked out some bugs and demonstrated MSX-DOS to Paul Allen on 11 October. The beta test version was officially delivered on 26 October 1983. It included an easter egg, that printed Paterson's name. The name was encoded with FAT code, so it could not be found by simply searching the file. After delivery of the beta version, the code was sent to ASCII in Japan. They created the I/O System for the MSX machine. That code was developed by Jay Suzuki. He figured out the easter egg and added his name to it.
ASCII was having problems getting MSX-DOS working on the actual MSX machine. They had not provided an actual MSX machine to Paterson, and instead flew him to Tokyo on 28 January 1984 to help them. It turned out that ASCII had been modifying the code without telling Paterson, so they were not working from the same code base. Paterson spent three days in Tokyo figuring out the problems and came back to Seattle.
Chris Larson from Microsoft and Jay Suzuki visited Paterson in Seattle at the end of February and early March 1984. They brought an MSX machine with an in-circuit emulator (ICE) for debugging. They got everything working and on 23 April 1984, Microsoft accepted delivery and made the final payment for MSX-DOS to Paterson.
At the time MSX-DOS was written, there was only one popular disk operating system for 8-bit Intel 8080 compatible microprocessors, which was Digital Research's CP/M-80 system. It was also often used with Z80 systems, because the Z80 used an extended 8080 architecture. Microsoft's own disk operating system was also inspired by CP/M.
To be able to run (slightly modified) CP/M software Microsoft decided to implement functionality similar to major parts of the CP/M BIOS, routines that CP/M systems used to do specific disk operating tasks, such as opening files, etc. Instead of basing the command processor on CP/M's CCP, which was known for some user unfriendliness, a command line interpreter (COMMAND.COM) based on its MS-DOS counterpart was used. Microsoft also chose its own FAT12 file system over CP/M's filing methods. This ensured that MSX-DOS floppies could be used on an MS-DOS machine, and that only one single formatting and filing system would be used. This was an important decision, because CP/M disks were often not interchangeable between machines, incompatible disk formatting schemes being a factor in this.
Microsoft also added a standard set of disk commands to MSX-DOS that were compatible with MS-DOS but not with CP/M. Finally they converted their pipelining system from MS-DOS to MSX-DOS. The resulting DOS was a system that was much user-friendlier than CP/M, but was (in principle) compatible with major CP/M software packages such as WordStar, Turbo Pascal and the "M80" assembler and "L80" linker.
Improved versions
Like MS-DOS 1.25, the first version of MSX-DOS did not have subdirectories, but in 1988 it evolved to version 2, offering facilities such as subdirectories, memory management and environment strings. Later versions of MSX computers (MSX-2) added an internal real-time clock, which MSX-DOS could use for time stamping files.
Commands
The following commands are supported by MSX-DOS version 2.
ASSIGN
ATDIR
ATTRIB
BASIC
BUFFERS
CD
CHDIR
CHKDSK
CLS
COMMAND2
CONCAT
COPY
DATE
DEL
DIR
DISKCOPY
ECHO
ERA
ERASE
EXIT
FIXDISK
FORMAT
HELP
MD
MKDIR
MODE
MOVE
MVDIR
PATH
PAUSE
RAMDISK
RD
REM
REN
RENAME
RMDIR
RNDIR
SET
TIME
TYPE
UNDEL
VER
VERIFY
VOL
XCOPY
XDIR
In addition, ASCII provided the following MSX-DOS2 Tools.
ADDAUX
BEEP
BIO
BODY
BSAVE
CAL
CALC
DUMP
EXPAND
GREP
HEAD
KEY
LIST
LS
MENU
MORE
PATCH
SLEEP
SORT
SPEED
TAIL
TR
UNIQ
VIEW
WC
See also
SymbOS
86-DOS
MIDAS
DOS Plus
References
Discontinued Microsoft operating systems
Disk operating systems
CP/M
MSX
MSX-DOS
1984 software |
29367248 | https://en.wikipedia.org/wiki/Nook%20Color | Nook Color | The Nook Color is a tablet computer/e-reader that was marketed by Barnes & Noble. A tablet with multitouch touchscreen input, it is the first device in the Nook line to feature a full-color screen. The device is designed for viewing books, newspapers, magazines, and children's picture books. A limited number of the children's books available for the Nook Color include interactive animations and the option to have a professional voice actor read the story. It was announced on 26 October 2010 and shipped on 16 November 2010. The Nook Color became available at the introductory price of US$249. In December 2011, with the release of the Nook Tablet, the price was lowered to US$169. On 12 August 2012 the price was lowered to US$149, and on 4 November 2012 it was further lowered to US$139. The tablet ran on Android.
As of December 2012, Barnes and Noble discontinued the Nook Color in favor of the Nook HD and Nook HD+.
Design
The device was designed by Yves Behar from fuseproject. Its frame is graphite in color, with an angled lower corner intended to evoke a turned page. The soft back is designed to make holding the device more comfortable.
Features
The Nook Color has a 1024x600 resolution multi-touch touchscreen LCD display, presenting a very vivid image, as opposed to the original Nook's secondary touchscreen. It does not feature an electronic paper display, making it as much a tablet computer as an e-reader. It has a customized display with color options, six font sizes, and Internet browsing over Wi-Fi, as well as a built-in media player that supports audio and video. The Nook Color allows installing applications approved by Barnes & Noble, with the company planning to provide tools for third-party software developers and an app store. Applications pre-loaded on the Nook Color include Chess, Sudoku, crossword puzzles, Pandora Radio, and a media gallery for viewing pictures and video.
As with the prior Nook, the Nook Color provides a "LendMe" feature allowing users to share some books with other people, depending upon licensing by the book's publisher. The purchaser is permitted to share a book once, with one other user, for up to two weeks. The other user may view the borrowed book using a Nook, Nook Color, or Barnes & Noble's free reader software on any other device running iOS (iPhone, iPod Touch, iPad), BlackBerry OS, Windows, Mac OS X, or Android. Adobe Digital Editions installed on a laptop paired with the Nook Color enables downloads from public libraries (EPUB). The LendMe sharing feature is only available for a small percentage of books purchased from B&N, and the Nook works more smoothly with publications purchased from B&N than with content from other sources.
The Nook Color uses a Texas Instruments ARM Cortex-A8 processor running at 800 MHz. The device has 8 GB of internal memory supplied by Sandisk, but only 5 GB is user-accessible and can store an estimated 6,000 books or 100 hours of audio. As with the original Nook, microSD and microSDHC memory cards can be inserted to expand the Nook Color's memory up to 32 GB. Although Barnes & Noble's official position is that the Nook Color's rechargeable battery is not user-replaceable, replacement instructions and aftermarket batteries are widely available. The original battery is expected to last for 8 hours of continuous use with the wireless turned off, but some replacements have less capacity. The device includes a built-in speaker and a universal 3.5 mm stereo headphone jack. VividView technology is used to enhance image quality when viewing in direct sunlight. Supported file formats include EPUB (DRM and non-DRM), PDF, Microsoft Office formats (DOC, DOCX, XLS, PPT, etc.), TXT, JPEG, GIF, PNG, BMP, MP3, AAC, and MP4.
A firmware update released 25 April 2011 added an app store, email client, Flash support within the web browser, social networking tools, video and audio embedded within books, and performance improvements.
It has also been discovered that the device's wireless chipset has hidden Bluetooth connectivity, available only after rooting the device or flashing it with the CyanogenMod 7 version of Android for this device.
Third-party apps and firmware update 1.4.1
When the Nook Color and Tablet were first offered, users could install third-party apps. However, days before Christmas 2011, the forced over-the-air "firmware update from Barnes & Noble for the Nook Tablet and Nook Color – 1.4.1 – close[d] the loophole that allowed users to sideload any Android app and also [broke] root for those who'[d] gone that extra step to customize the device."
Reception
Since launch, Nook Color received generally positive reviews, with PC Magazine declaring "Nook Color makes a perfectly amiable reading companion if you want to see your books in full color", while Engadget says "if you're a hardcore reader with an appetite that extends beyond books to magazines and newspapers, the Color is the first viable option we've seen that can support your habit".
In late March 2011, it was reported that the Nook Color had sold close to 3 million units since its launch.
Use as an Android tablet
As an Android device, the Nook Color can be modified to run most Android applications. One common method that unlocks this function is rooting, which grants users root access to the Nook Color's file system. Doing so voids the device's warranty, though it can often be returned to (non-rooted) factory defaults for warranty claims.
In addition to rooting the stock operating system, complete versions of Android are available that can replace the stock firmware and provide functions similar to other Android devices. Android versions 2.2 (Froyo), 2.3 (Gingerbread), 4.0 (Ice Cream Sandwich), and 4.1–4.3 (Jelly Bean) have all been fully ported to the Nook Color and are available as free downloads. KitKat (4.4) is actively being ported, with a few known issues.
Perhaps the most popular such replacement is CyanogenMod 10, an enhanced version of Android 4.1 (Jelly Bean), which, as of September 2013, has over 55,000 reported installations on the Nook Color (codenamed "encore"). CyanogenMod is a community-developed firmware replacement that can be downloaded for free. It can be installed to the internal storage or started from a microSD card, which typically will not affect the internal installation. Neither replacing the stock operating system nor running the operating system from a microSD card requires rooting.
Many people have reported that KitKat (CyanogenMod 11) runs with less lag than CM10.
USB port
The original Nook used a standard Micro USB connection for both battery charging and PC connectivity. The Nook Color uses a modified connector with two depths. The first depth is compatible with Micro USB (5-conductor), while the second depth has 12 conductors. This change was made to increase the amount of power available to charge the larger battery of the Nook Color when using the included cable at 1.9 A as opposed to the 0.5 A limit of standard USB connections.
Because of this, the USB cable included with the Nook Color is physically incompatible with other devices employing standard micro-USB connectors. However, the Nook Color itself is physically compatible with standard micro-USB cords and will still charge at a slower rate on such cords.
References
External links
Official NOOK Developer Program Site
Nook Owner Community Site
CyanogenMod Stable Builds for Nook Color
CyanogenMod Nightly Builds for Nook Color
Nook Color
Android (operating system) devices
Dedicated e-book devices
Products introduced in 2010 |
7589401 | https://en.wikipedia.org/wiki/2006%20California%20Golden%20Bears%20football%20team | 2006 California Golden Bears football team | The 2006 California Golden Bears football team represented the University of California, Berkeley, in the 2006 NCAA Division I FBS football season. They played their home games at California Memorial Stadium in Berkeley, California, and were coached by Jeff Tedford.
The Bears began the season with a number 12 ranking. After sustaining an upset by then number 23-ranked Tennessee in their opening game, the Bears won their next eight games before suffering another upset to unranked Arizona followed by a subsequent loss to then number 4 USC. All of these defeats came in away games. The Bears qualified for a share of the Pac-10 title after USC was upset by rival UCLA the following week. The team made its second Holiday Bowl in three years, blowing out #21 Texas A&M and finishing the season ranked #14.
Preseason
After Jeff Tedford took the Cal football coaching job following their 1–10 2001 campaign, Cal saw an immense improvement in its football program, with five straight winning seasons from 2002 to 2006.
This particular Bears team, with a wealth of talent returning from their previous season, had a good amount of preseason hype surrounding it, with the preseason AP Poll ranking the Bears 9th, while the Coaches Poll rated them 12th, their highest ranking since 1952. After a season-ending injury in the first game of the 2005 season, sophomore Nate Longshore was named the starting quarterback for the Bears over Joe Ayoob, who had struggled in the 2005 games that he had started. After a very impressive season, Cal also launched a program to officially campaign for running back Marshawn Lynch to win the Heisman Trophy.
In the College GameDay preview on ESPN, Lee Corso predicted the Bears to win the Pac-10 championship over the USC Trojans, and even went as far as to say the Bears would win the national championship over West Virginia, saying "they play a tough schedule, but they could lose to Tennessee and still run 11 straight ball games. I like Cal...I'm telling you, it's Cal versus West Virginia, and then Cal wins it; the national title goes to Cal."
Schedule
Game summaries
Tennessee Volunteers
Cal's opener was on the road against the #23 Tennessee Volunteers, an SEC team just coming off of a disappointing 5–6 season under coach Phillip Fulmer. With a hostile crowd of 106,009 watching at Neyland Stadium, Cal was torn to shreds. Tennessee quarterback Erik Ainge went 11 for 18 and passed for 291 yards and 4 touchdowns against the highly touted Cal defense. Vol receiver Robert Meachem had a breakout game with 5 catches for 182 yards and 2 TD's. Newly appointed starting quarterback Nate Longshore struggled in his first road start, only passing for 85 yards. Heisman candidate Marshawn Lynch was held to 74 yards on 12 carries. While the Bears held Tennessee to 14 first-half points, a quick 80-yard touchdown pass from Ainge to Meachem at the start of the third quarter started a tumble that ended up giving the Volunteers a 35–0 lead by the middle of the third quarter. The Tennessee secondary was then dispatched to finish the Bears off, and Joe Ayoob was substituted for Longshore, putting up 187 yards, including a touchdown. Tennessee racked up 514 yards against the Bears. Ainge later said "This game tonight wasn't just for Tennessee versus California. It was for the South versus the West Coast, the SEC versus the Pac-10."
Because the loss was so lopsided, and because Cal had been expected to do so well this season, many immediately dismissed Cal from any type of national discussion. Cal dropped 13 spots in the AP Poll to #22 and 11 spots in the Coaches Poll to #23, while Tennessee rose twelve spots in the AP poll to #11 and six spots to #17 in the Coaches Poll.
Minnesota Golden Gophers
Cal got its first home game of the season on September 9 against the Minnesota Golden Gophers in front of a crowd of 55,035. The Gophers were 1–0 with their win against Kent State. Minnesota scored on its first drive, but the Bears bounced back with a touchdown in the first and second quarter to receivers Robert Jordan and DeSean Jackson, respectively. The Gophers' Dominic Jones responded with a 99-yard kickoff return touchdown. Eventually California pulled ahead on two touchdowns by Jackson, and led 28–17 at the half, and with two more touchdowns in the third and fourth quarter on rushes by Marshawn Lynch, the Bears put the Gophers away 42–17.
Nate Longshore bounced back in a big way from his previous start, finishing 22 for 31 with 300 yards and 4 touchdowns. He later joked about how it felt good to finally play a fourth quarter.
Lynch finished with 139 yards and 2 touchdowns on 27 carries, while backup running back Justin Forsett gathered 77 rushing yards of his own. Receivers Lavelle Hawkins and DeSean Jackson both finished with over 100 yards, with Jackson gathering three touchdowns as well. Cal got 531 yards of offense in all, compared to Minnesota's 352. Minnesota quarterback Bryan Cupito was 21 for 33 with 243 yards and 2 interceptions, both picked by Cal cornerback Daymeion Hughes.
With this loss, the Golden Gophers lost their first non-conference game in 18 tries, the longest streak in the Big Ten.
Cal rose in the polls one spot in the AP Poll to #21, and two spots in the Coaches Poll to #21, after this win.
Portland State Vikings
Cal played at home the next week against the Portland State Vikings, a Division I Football Championship Subdivision opponent that was the last team added to the Bears' 2006 schedule. The Vikings were 2–0 going into this game at California Memorial Stadium, played before a crowd of 61,082. Portland State scored a field goal on their opening drive, but the Bears responded with three touchdowns before the first quarter ended, one coming on a 30-yard Daymeion Hughes interception return. In the second quarter, Marshawn Lynch posted a 71-yard touchdown run, and another touchdown midway through the quarter put the Bears up 35–3. Portland State scored two quick touchdowns before the half ended, but the half closed with Cal receiver DeSean Jackson scoring on a 27-yard pass. Cal substituted much of its roster into the game in the second half, and neither team scored again, as the Bears won 42–16, bringing their record to 2–1.
Cal brought in 502 total yards, 335 being in the first half. Nate Longshore passed for 225 yards, 2 touchdowns, and 1 interception. Lynch rushed for 112 yards and a touchdown on 6 carries, and Jackson had five catches for 103 yards and a score. Backup Cal quarterback Steve Levy also played, going 7 for 10 with 66 yards. Vikings quarterback Rob Freeman was 12 for 17 with 119 yards and a touchdown.
Despite this, Cal committed three turnovers and nine penalties that cost them over 100 yards. Both Jackson and Hughes seemed confident they could play better the next week in Pac-10 play.
With this win, Cal stayed steady in the AP Poll at #21, and rose one spot in the Coaches Poll to #20.
Portland State would finish the season 7–4, 6–2 in the Big Sky Conference.
Arizona State Sun Devils
On September 23, Cal played their first Pac-10 game of the season. That year, Pac-10 football officials adopted twelve-game schedules for the conference's teams for the first time, with nine of the twelve games being conference games, producing a full round-robin within the conference to determine its champion.
They played against the Arizona State Sun Devils, ranked #22 in the AP Poll and #18 in the Coaches Poll. Many expected the Sun Devils to have a good chance to seriously compete for the conference title this year. They entered with a 3–0 record against non-conference opponents.
Arizona State scored on its opening drive, putting them up 7–0 early in the first quarter. But Cal answered back, putting up five straight touchdowns by midway through the second quarter, four being passes by Nate Longshore and the fifth being an 80-yard punt return by DeSean Jackson. Arizona State scored again before the half, and just as the half wound down, Daymeion Hughes intercepted a pass for a score to put Cal up 42–14 when the first half was over. Both teams managed one more touchdown each in the second half to give Cal the 49–21 victory. Sun Devils coach Dirk Koetter confessed after the game, "When the pressure builds, even the routine plays become tough. This is exactly what happened to Cal at Tennessee."
Longshore passed for 270 yards and four touchdowns, while Marshawn Lynch added 124 rushing yards on 17 carries and a receiving touchdown. Daymeion Hughes had his fifth interception in three games. Cal also controlled the ball for only slightly more than 23 minutes, compared to the Sun Devils' 36 minutes, even though they easily outscored them.
After this win, Cal rose one spot in the AP Poll to number 20, and stayed steady in the Coaches Poll at number 20. Arizona State dropped from both polls after the loss.
The Sun Devils finished the season 7–6 (4–5 in the Pac-10), losing the Hawaii Bowl to Hawaii; Koetter was fired soon thereafter and replaced by Dennis Erickson.
Oregon State Beavers
In week 5, Cal headed to Corvallis, Oregon to play the Oregon State Beavers at Reser Stadium. The Beavers had a 2–1 record going into this game, and had yet to play a Pac-10 matchup this season. Last season, the Beavers played a ranked Cal squad at Memorial Stadium in Berkeley and upset them 23–20, dropping the Bears from the rankings for the first time in over a year.
In front of a rowdy crowd, the Bears dominated the Beavers, scoring four touchdowns and hitting a field goal before the Beavers were able to score a field goal as the first half ended. In the second half, the Beavers scored another field goal and a fourth-quarter touchdown, but the Bears also managed another touchdown and field goal to end the game 41–13.
Nate Longshore had a career-high day, passing for 341 yards and four touchdowns, along with one interception. Marshawn Lynch also had a rushing touchdown, along with two receiving ones. DeSean Jackson and Lavelle Hawkins also scored on receiving touchdowns. Oregon State quarterback Matt Moore had 187 passing yards before being replaced in the fourth quarter.
This win propelled the Bears four spots in the AP Poll to #16, and shot them up three spots in the Coaches Poll to #17. Their record stood at 4–1, with a 2–0 Pac-10 record, while the Beavers left with a 2–2 record, 0–1 in the Pac-10.
With a looming game against the undefeated Oregon Ducks, Coach Tedford confessed that he would "immediately" begin thinking about the showdown.
Despite the blowout, the Beavers finished the season with a 10–4 record and third in the Pac-10 thanks to a 7–3 conference record and a win in the 2006 Sun Bowl over Missouri, and they would score a memorable upset over the then-#3 USC Trojans several weeks after the Cal loss.
Oregon Ducks
Cal played the next Saturday, October 7, for California's Homecoming weekend, against the undefeated #11 (in both the AP Poll and Coaches Poll) Oregon Ducks, a team that was 4–0, and 1–0 in Pac-10 play. It was broadcast to the majority of the country on ABC Sports. The game at California Memorial Stadium was played in front of a sold-out crowd of 72,516. Surprisingly, Cal came into the game wearing gold jerseys, instead of their normal blue outfits. Before the game, the Cal student section gave a raucous tribute to Nobel Prize winner George Smoot, who had just won a few days earlier for his research on the Big Bang.
Cal continued to shine in its victory over the Ducks. Quarterback Nate Longshore had a touchdown pass early in the first quarter, but the Ducks bounced back with a field goal. Cal then scored three straight touchdowns, two being passes from Longshore, and one being a memorable 65-yard punt return by receiver DeSean Jackson to put the Bears up 28–3. Oregon scored a touchdown before the half ended, leaving Cal with a 28–10 lead. A field goal and another Longshore touchdown into receiver Robert Jordan's hands gave the Bears a 38–10 lead, but the Ducks countered with a touchdown pass as the third quarter ended. Another touchdown score by both teams in the fourth quarter, Cal's coming from tailback Justin Forsett's 23-yard run, gave the Bears an impressive 45–24 victory.
This was the fifth straight game in which the Bears scored more than 40 points. Longshore was 14 for 26 with 189 yards, four touchdowns (including one on the ground), and one interception. While Lynch was held to just 50 yards because of an ankle injury, Justin Forsett rushed for 163 yards on 27 carries, with one score. DeSean Jackson had two touchdowns, one passing and one 65-yard punt return. Receivers Craig Stevens and Robert Jordan also had touchdown receptions. Oregon quarterback Dennis Dixon threw for 263 yards, two touchdowns, and three interceptions. Running back Jonathan Stewart, the then-leader in Pac-10 rushing yards, was held to just 25 yards on 18 carries by the Cal defense.
This game put Cal in the prime position to challenge USC for the Pac-10 championship. Cal moved up six spots in both the AP Poll and the Coaches Poll to #10 and #11, respectively, after this win, while Oregon dropped seven spots in both polls to #18.
Later in the season, the Ducks were bowl-eligible but unranked in the AP Poll and the Coaches Poll, though they held a #24 ranking in the BCS standings with a 7–3 record (4–3 in the Pac-10).
Washington State Cougars
One week later, Cal traveled to Pullman, Washington to battle the Washington State Cougars at Martin Stadium. The Bears had not won in Pullman since 1979, losing nine straight games. The Cougars sported a 4–2 record, 2–1 in the Pac-10, only falling to the then-AP ranked #4 Auburn Tigers and to the then-AP ranked #3 USC Trojans in a very close contest.
Rather than a blowout victory using the offense, the Bears won this game with defense. Three touchdowns were scored in the first half, two from Marshawn Lynch and one rushing touchdown by quarterback Nate Longshore, and the Cougars were held to just a field goal. After the half, the Bears buckled down on defense and did not allow the Cougars to score again. Washington State had an opportunity to score in the third quarter, but an official review ruled receiver Dwight Tardy's knee was down on the 1-yard line. On fourth and one, the Bears defense stopped the rush to turn the ball over on downs. A quiet second half gave Cal its sixth straight win, 21–3. They left with a 6–1 record and an unblemished 4–0 Pac-10 tally. The Cougars fell to 4–3, 2–2 in the Pac-10.
Longshore threw for 176 yards and two interceptions, along with one rushing touchdown. Marshawn Lynch had 152 yards on 25 carries with Cal's two other scores. Receivers DeSean Jackson and Robert Jordan gathered 60 and 56 yards, respectively. This was the first game of the season Jackson did not score a touchdown. Daymeion Hughes got his sixth interception of the season that set up one of Lynch's touchdowns. The Cal defense stifled the Cougars running game to just 88 yards. The offensive line also kept the Cougars, who led the nation in quarterback sacks with 27, at bay, only letting them sack Longshore once.
After Cal's offense had been at full throttle for five straight games, many began to wonder if the Bears were beginning to struggle. The Bears fell one spot in the AP Poll to #11 and stayed constant at #11 in the Coaches Poll after this win. In the first BCS Standings of the season, released the next day, the Bears sported a #10 position.
The game was not broadcast in the Bay Area, angering many Cal fans. However, Slingbox gave fans an opportunity to watch the game at California Memorial Stadium. After a successful test the day before, over 3,000 fans crammed into Memorial Stadium to watch as Slingbox streamed the game from Washington to the large television in the stadium.
Washington
58,534 people piled into California Memorial Stadium on October 21 to watch Cal play the Washington Huskies. The Huskies, expected to finish last in the Pac-10 at the beginning of the season, had exceeded that prediction, starting the season 4–1. Two straight losses, one a near-upset of the USC Trojans at Los Angeles Memorial Coliseum and the other a shocker against Oregon State, had stalled their momentum, leaving them with a 4–3 record, 2–2 in the Pac-10. Coach Tyrone Willingham had already doubled the Huskies' previous year's win total. The Bears were the clear favorite to win the game, but some predicted that the Huskies could provide a "trap game" for the Bears.
Backup Washington quarterback Carl Bonnell subbed in for the injured starting quarterback Isaiah Stanback. The Huskies struck first, driving down field to get a 33-yard field goal. In the second quarter, Bonnell pitched a 49-yard pass to receiver Anthony Russo to give the Huskies a 10–0 lead. After scoring 192 points in their last six games, the Bears only managed a half-ending field goal from Tom Schneider to end the half with Washington up 10–3.
On Cal's first possession of the second half, they drove the ball 62 yards down field, capping it off with Justin Forsett running in for a touchdown. Later in the third quarter, Schneider made a 50-yard field goal to give the Bears their first lead of the game, 13–10. The Huskies did not stay down for long, starting the fourth quarter with Bonnell rushing seven yards for a score. Cal tried to respond, but only got a field goal, leaving them one point behind. After a defensive stop, quarterback Nate Longshore led the last drive of regulation 82 yards in 12 plays as the game wound down, converting two third-and-tens. Marshawn Lynch ran in 17 yards for the score, and Justin Forsett rushed in for the two-point conversion, giving the Bears a 24–17 lead with 1:52 to go. Cal seemed safe, but Bonnell kept pushing upfield on the Huskies' possession, and capped it off with a 40-yard Hail Mary pass, batted off the hands of three Cal defenders into the hands of receiver Marlon Wood as the clock ran to zero.
Cal was on offense first in overtime. On the second play, Longshore passed back to Lynch, who ran 22 yards in for the touchdown. On Washington's possession, Cal defender Desmond Bishop intercepted a pass at the goal line and returned it 82 yards to the Washington 17, Cal's fifth interception of the game, giving the Bears a heart-stopping 31–24 victory.
Marshawn Lynch jubilantly took a cart used to drive injured players off the field for a joyride in the middle of the stadium following the win. He gathered 150 yards on 21 carries, while sporting two sprained ankles. Longshore was 21 for 36 with 291 yards. Bonnell passed for 284 yards, but the five interceptions proved too costly. Bishop later said of the victory, "We know we can win big games now. We know it's inside us."
In a weekend full of near upsets of the Texas Longhorns, Notre Dame, and the Tennessee Volunteers, the Bears dropped one spot in both polls to #12, but stayed at #10 in the BCS Rankings. The Bears now had a 7–1 record, with a perfect 5–0 Pac-10 record. Washington dropped its third straight, falling to 4–4, 2–3 in the Pac-10.
The Washington Huskies would finish 5–7, 3–6 in the Pac-10.
UCLA Bruins
After a bye week, which saw the Bears take sole possession of first place in the Pac-10, rising two spots in the AP Poll to #10 and one spot in the Coaches Poll to #11 (staying steady at #10 in the BCS Standings), Cal faced the UCLA Bruins, a Pac-10 team and rival of the Bears, who sported a 4–4 record, 2–3 in the Pac-10. The previous year, in a contest between the unbeaten, ranked teams at the Rose Bowl, the Bears fell to the Bruins 47–40, letting their 40–28 fourth-quarter lead melt away.
The Bruins' starting quarterback Ben Olson was still injured, so backup quarterback Patrick Cowan made his fourth start, at California Memorial Stadium. The Bruins had endured a tough three weeks after starting 4–1, losing to the Oregon Ducks on the road, losing their chance at an upset over the #10 Notre Dame Fighting Irish in the last minute of the game at Notre Dame Stadium, and losing at home to the Washington State Cougars 37–15. The Bears and Bruins had split their previous six contests, with the home team winning on each occasion.
Cal wore their gold jerseys from the Oregon game four weeks earlier in front of a sold-out crowd of 72,516, Cal's second of the year and just their fifth against opponents besides Stanford since the 1950s. In the first quarter, Cal drove down the field to score on their first possession of the ballgame. After a defensive stop of UCLA and then one of the Bears, the Bruins took possession and Cowan ran in for a twelve-yard touchdown. Quarterback Nate Longshore took the helm, driving 76 yards on the next possession, the drive capped off by Marshawn Lynch's brilliant 24-yard catch-and-dash through the UCLA defense. The Bruins drove deep on their next drive but settled for a field goal, which ended the first-half scoring.
On an early third-quarter Cal possession, Longshore threw to receiver Robert Jordan for a 44-yard touchdown. After UCLA's failed drive, DeSean Jackson, aided by a devastating block by Thomas DeCoud, returned the ensuing punt 72 yards for a touchdown, putting the Bears up 28–10. It was the fourth punt return for a touchdown of Jackson's career, tying a Pac-10 record. Lynch ran in for another touchdown early in the fourth quarter. UCLA added two more touchdowns, including a 70-yard run from Chris Markey, while Cal could only get a field goal. This propelled the Bears to their eighth straight win over the Bruins, 38–24.
Longshore was 20-for-24 with 266 yards and three touchdowns. Lynch ran for 81 yards and a touchdown, and had 45 receiving yards along with another touchdown. Receiver Robert Jordan, Lynch's second cousin, had 86 receiving yards and two touchdowns on 5 catches. Lavelle Hawkins and DeSean Jackson had 60 and 58 receiving yards on 5 and 3 catches, respectively. Backup running back Justin Forsett also ran for 60 yards on just 11 carries. Desmond Bishop and Daymeion Hughes both had interceptions off of Cowan; Hughes's eighth was the conference best at the time. Cowan threw for 329 yards and was 22-for-40 with the two interceptions. UCLA had 516 total yards, compared to Cal's 433.
The Bears' win gave them an 8–1 record, 6–0 in the Pac-10. They saw a two-spot rise in the AP Poll to #8 and another two-spot rise in the Coaches Poll to #9. They were also #8 in the BCS standings, up two spots from the previous week.
The loss left the Bruins at 5–5, 3–4 in the Pac-10, on the verge of bowl eligibility.
Arizona Wildcats
Cal traveled to Tucson, Arizona, that Saturday to battle the Arizona Wildcats. The AP-ranked #8 Golden Bears went into the game with an 8–1 record and a 6–0 mark in the Pac-10. The Wildcats were 4–5, 2–4 in the Pac-10, coming off of an upset of the then-ranked Washington State Cougars at Pullman the week before. Quarterback Willie Tuitama had missed two previous games due to a concussion, and went back in against the Cougars, passing for 159 yards. Junior running back Chris Henry had perhaps the biggest day, running for 94 yards and two touchdowns on a school-record 35 carries. Cal had blanked the Wildcats in their previous two meetings, 28–0 the year before in Berkeley and 38–0 in 2004 at Tucson. Despite a looming, much-anticipated showdown against the USC Trojans at Los Angeles Memorial Coliseum on November 18 that would probably decide the Pac-10 championship, Cal quarterback Nate Longshore insisted that they were solely focused on the Wildcats this week.
A 95-yard punt return by DeSean Jackson was the first score, putting the Bears up 7–0. Arizona got a field goal next, putting them on the board 7–3. Midway through the first quarter, Marshawn Lynch's 79-yard touchdown run was erased by a block-in-the-back call on Lavelle Hawkins, and the Bears had to settle for a field goal. In the middle of the second quarter, Longshore hit Jackson with a 72-yard touchdown pass to put the Bears up 17–3 at the half.
Arizona would drive the clock down on two drives in the second half and get a touchdown on both drives to tie the game at 17. Within the span of three plays on the second of these drives, Cal had two interceptions by Bernard Hicks and Daymeion Hughes nullified by two separate penalties. Hughes later argued that he should not have been called for pass interference, as he was just reaching for the ball. On Cal's next drive, with the score now tied, Longshore threw and the ball was intercepted by Arizona's Antoine Cason for another score, putting the Wildcats up 24–17.
The Bears responded on their next series with a long pass to Lavelle Hawkins, but he tripped inside the 5-yard line before reaching the goal. Cal could not capitalize, settling for a field goal that left them with a four-point deficit at 24–20. A defensive stop against Arizona gave the Bears the ball again. Longshore apparently hit Jackson for a 63-yard score with just over two minutes left, but a video review ruled his foot out of bounds at the 41-yard line. A few plays later, Longshore's pass was batted and intercepted with less than two minutes left, clinching the game for Arizona. The student section wildly rushed the field as Cal was taken out of the national title picture for the season.
While Cal outgained the Wildcats in yardage 356 to 262, and though Longshore passed for 250 yards, his three interceptions proved quite costly, as did all of Cal's penalties on crucial plays. Despite their 17–3 mid-third-quarter lead, the Bears simply made too many mistakes. Hughes later reflected on the loss, "The whole game was like plays going their way."
Cal fell to 8–2, 6–1 in the Pac-10 after this loss, dropping nine spots to #17 in the AP Poll and eight spots in the Coaches Poll to #17. Their BCS ranking decreased seven spots to #15. Despite the upset, they still had a chance to clinch a Rose Bowl bid with a win against the conference-leading USC Trojans the next week.
Arizona would finish 6–6, 4–5 in the Pac-10, but despite being eligible, would not be picked for a bowl game.
USC Trojans
In one of the most important Pac-10 games of the season, Cal traveled to Los Angeles to battle the USC Trojans at the Los Angeles Memorial Coliseum. A victory by either side would ensure a BCS berth, since the winner would hold the tiebreaker over the other and be crowned champion of the Pac-10. While the previous week's loss had knocked Cal out of the national title conversation, a victory would have ensured its first Rose Bowl berth since 1959.
USC had higher aspirations; they were aiming for their third trip to the BCS National Championship Game, and with Michigan's loss to Ohio State earlier, a sweep of their final three games would guarantee them the #2 seed and a date with the Buckeyes. A win against Cal would at the very least clinch the Rose Bowl.
The Trojans and Bears would struggle to a standstill for three quarters. The Golden Bears would take a 9–6 lead into halftime thanks to a safety by Brandon Mebane and a TD throw from Nate Longshore to Lavelle Hawkins, although the margin could have been bigger if not for two turnovers in USC territory. The Trojans would score one field goal in each of the three quarters to tie the game at 9.
In the fourth quarter, USC would break the game open. Two touchdown passes on two consecutive drives from John David Booty to Dwayne Jarrett and Steve Smith respectively provided the game's final scoring. Just as critical was USC's defense shutting down Cal's offense in the second half, allowing only four first downs. USC clinched its fifth consecutive Pac-10 title and at least a trip to the Rose Bowl.
Booty struggled in the first half, but went 13 for 19 for 168 yards and 2 touchdowns in the second. Jarrett and Smith combined for 11 catches and 154 yards and the aforementioned two scores. For Cal, Marshawn Lynch was held to 88 yards on 20 carries, and Longshore went 17–38, throwing for 1 touchdown but also 2 interceptions. The Bears would go into Thanksgiving weekend (their traditional bye week) needing to win The Big Game to wrap up their second Holiday Bowl bid in three years.
Despite the victory, the Trojans would be upset by UCLA in the last week of the season, ruining their BCS title aspirations. This opened the door for the Florida Gators, who beat the Arkansas Razorbacks in the 2006 SEC Championship Game to claim a spot in the 2007 BCS National Championship Game against Ohio State. USC would go on to the Rose Bowl, where they defeated Big Ten runner-up and BCS #3-ranked Michigan 32–18. The Trojans finished the season 11–2 and 7–2 in the Pac-10, ranked 4th in both the AP and Coaches Polls.
Stanford Cardinal
After being defeated by USC two weeks earlier, Cal faced their rivals the Stanford Cardinal in the 109th Big Game. It was not a particularly strong performance from the Bears, who had only one offensive touchdown. Nate Longshore threw for 217 yards and one score against a Cardinal defense determined to stop the California offense. Both Marshawn Lynch and Justin Forsett were held under 100 yards by a defense that at one point had ranked worst in the entire NCAA that season.
However, Tom Schneider kicked four field goals (including tying a school record with a 55-yarder in swirling winds) and Syd'Quan Thompson picked up a fumble recovery for a late first-half score, and that was just enough for the Bears to stave off a strong performance from T.J. Ostrander and the hapless one-win Cardinal squad.
Since Cal defeated Stanford and USC was defeated by UCLA the same day, Cal got its first share of the Pac-10 title since 1975, with Cal and USC both atop the Pac-10 in the final standings. Because USC had beaten Cal, though, the Trojans would be heading to the Rose Bowl while the Golden Bears had to settle for San Diego and the Holiday Bowl.
Stanford finished 1–11, 1–8 in the conference and fired their head coach Walt Harris after the season was over. He was replaced by Jim Harbaugh.
Holiday Bowl
In the Holiday Bowl against Texas A&M, Lynch rushed for 111 yards and two touchdowns while the Golden Bears' defense held the Aggie offense scoreless in the second half. Longshore threw for a touchdown and rushed for one, but also threw an interception while passing for 231 yards in the Golden Bears' win. Justin Forsett also ran for 125 yards and scored once himself.
Roster
Player recognition
Team awards
Bear Backer Award (Most Valuable Player – voted on by the team): Marshawn Lynch (offense); Daymeion Hughes (defense)
Dink Artal Award (Player Best Exemplifying Cal Spirit): Byron Storer/Mickey Pimentel
Ken Harvey Award (Academic Commitment & Improvement): Marcus O'Keith
Frank J. Schlessinger Coaches Award (Athletic, Academic, Community): Scott Smith
Ken Cotton Award (Most Courageous Player): Andrew Cameron/Steve Kelly
Everett Merriman Award (Community Service): Eric Beegun
Stub Allison Award (Most Inspirational Player): Desmond Bishop
Joe Roth Award (Courage, Attitude & Sportsmanship): Nu'u Tafisi
Andy Smith Award (Most Big C Playing Time): Daymeion Hughes
Senior Lifter of the Year: Byron Storer
Freshman Lifter of the Year: James Montgomery
Scout Team Player of Year: Chris Guarnero, Jeremy Ross (co-offense), Charles Amadi, Kyle Kirst (co-defense), Brian Holley (special teams)
Berkeley Breakfast Club Award (Outstanding Big Game Player): Tom Schneider
Bob Tessier Award (Most Improved Lineman): Alex Mack (offense); Abu Ma'afala (defense)
Most Improved Player: Thomas DeCoud
Bob Simmons Award (Most Valuable Freshman): Syd'Quan Thompson
Most Valuable Back: Marshawn Lynch (running back); Daymeion Hughes (defensive back)
J. Scott Duncan Award (Most Valuable Special Teams Player): DeSean Jackson/Byron Storer
Cal Coaches Award : Tim Mixon
Brick Muller Award (Most Valuable Lineman): Brandon Mebane (defense); Erik Robertson (offense)
Cort Majors Captains Award (Team Captains): Craig Stevens (offense); Desmond Bishop (defense)
References
California
California Golden Bears football seasons
Pac-12 Conference football champion seasons
Holiday Bowl champion seasons
California Golden Bears football |
37716295 | https://en.wikipedia.org/wiki/INGENIAS | INGENIAS | INGENIAS (Engineering for Software Agents) is an open-source software framework for the analysis, design and implementation of multi-agent systems (MAS).
Technical approach
Since its inception, it has adopted a model-driven engineering (MDE) approach.
Model-driven engineering (MDE) organizes developments around the specification of systems through models that are automatically transformed to generate other artefacts, e.g., code, tests, or documentation.
INGENIAS follows these principles by specifying the MAS meta-models that define its modeling language and allow its development tools, distributed as the INGENIAS Development Kit (IDK), to be generated automatically.
The INGENME framework, developed as part of the INGENIAS research line, supports this automated development from meta-models of model editors, checking and validation modules, and generators for code, tests, and documentation.
Details
The INGENIAS approach based on MDE supports research in different areas characterized by the use of modeling languages and requiring flexibility to adapt these to new requirements. In particular, it has been very successful in the areas of Software Agents and Agent-based simulation.
The agent paradigm uses the concept of agent as the basis for developing complex software systems. The field is fairly fragmented, with different approaches to applying agents and different perspectives on the agent concept itself. In this context, INGENIAS emerged as an integrative approach able to support the simultaneous use of different works. This capability rests on its facilities for developing new versions of its modeling language: the addition, modification, or deletion of concepts just requires modifying its meta-models and then regenerating the development tools using INGENME. This allows researchers to focus on the theoretical tasks of deciding the relevant concepts, relationships, and attributes of their work, while the infrastructure generates the support tools for their application.
This flexibility has allowed INGENIAS to address new extensions over the years. Two of them are of particular relevance. The INGENIAS development process is one of the few among agent-oriented methodologies to have been formally specified with SPEM, a language of the Object Management Group (OMG). Currently, there is one development process based on the Unified Process and another based on Scrum.
It also incorporated research on requirements elicitation from an organizational perspective. This work adopts the Activity Theory framework from Social Sciences to develop a modeling language for requirements with a holistic perspective of organizations and their systems, as well as several semi-automated processes for the elicitation and validation of these requirements.
The continuous revision of the INGENIAS modeling language and the tools for its application have made it one of the most popular methodologies in the literature and one actually applied by researchers and engineers. It has been repeatedly included in relevant surveys and comparisons in the field (according to Google Scholar, Elsevier's Scopus and Thomson ISI's Web of Knowledge), e.g., Brian Henderson-Sellers and Paolo Giorgini (2005) or Beydoun et al. (2009).
Its open-source tools organized in the IDK are also very successful in the agent community, as assessed by their number of downloads.
INGENIAS won the best demo award at AAMAS 2008, held in Estoril, Portugal.
See also
Model-driven engineering
Software agent
Multi-agent system
Juan Pavón
References
External links
INGENIAS main site
Sourceforge.net webpage
Free software
Free software programmed in Java (programming language)
2002 software
Cross-platform free software
Agent-based software
Data modeling tools
Software frameworks |
1054392 | https://en.wikipedia.org/wiki/David%20Turner%20%28computer%20scientist%29 | David Turner (computer scientist) | David A. Turner (born 26 January 1946) is a British computer scientist. He is best known for designing and implementing three programming languages, including the first for functional programming based on lazy evaluation, combinator graph reduction, and polymorphic types: SASL (1972), Kent Recursive Calculator (KRC) (1981), and the commercially supported Miranda (1985). Miranda had a strong influence on the later Haskell.
He has a Doctor of Philosophy (D.Phil.) from the University of Oxford. He has held professorships at Queen Mary College, London, University of Texas at Austin and the University of Kent at Canterbury, where he has spent most of his career and retains the title of Emeritus Professor of Computation.
He was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68.
He is also an Emeritus Professor at Middlesex University, England.
Publications
Turner, David A. SASL language manual. Tech. rept. CS/75/1. Department of Computational Science, University of St. Andrews 1975.
Another Algorithm for Bracket Abstraction, D. A. Turner, Journal of Symbolic Logic, 44(2):267–270, 1979.
Functional Programming and its Applications, D. A. Turner, Cambridge University Press 1982.
A Parser Generator for use with Miranda, ACM Symposium on Applied Computing, pages 401–407, Philadelphia, USA, Feb 1996.
Elementary Strong Functional Programming, D. A. Turner, in R. Plasmeijer, P. Hartel, eds, "First International Symposium on Functional Programming Languages in Education", Lecture Notes in Computer Science, volume 1022, pages 1–13, Springer-Verlag, 1996.
Ensuring Streams Flow, Alastair Telford and David Turner, in Johnson, ed., "Algebraic Methodology and Software Technology", 6th International Conference, AMAST '97, Sydney Australia, December 1997, Lecture Notes in Computer Science, volume 1349, pages 509–523. AMAST, Springer-Verlag, December 1997.
Ensuring the Productivity of Infinite Structures, A.J.Telford, D.A.Turner, "Technical Report TR 14-97", 37 pages, Computing Laboratory, University of Kent, March 1998. Under submission to "Journal of Functional Programming".
Ensuring Termination in ESFP, A. J. Telford and D. A. Turner, in "15th British Colloquium in Theoretical Computer Science", page 14, Keele, April 1999. To appear in "Journal of Universal Computer Science".
A Hierarchy of Elementary Languages with Strong Normalisation Properties, A.J.Telford, D.A.Turner, "Technical Report TR 2-00", 66 pages, University of Kent Computing Laboratory, January 2000.
Total Functional Programming, Keynote address, pp 1–15, SBLP 2004, Rio de Janeiro, May 2004.
Church's Thesis and Functional Programming, in A. Olszewski ed., "Church's Thesis after 70 years'", pages 518-544, Ontos Verlag, 2006.
References
External links
, University of Kent at Canterbury
Archive copy of an old Staff page at Middlesex University
Miranda functional programming language
Living people
Academics of Queen Mary University of London
Academics of the University of Kent
Academics of Middlesex University
British computer scientists
Members of the Department of Computer Science, University of Oxford
Alumni of Brasenose College, Oxford
Programming language designers
Programming language researchers
1946 births |
6227339 | https://en.wikipedia.org/wiki/Ravindran%20Kannan | Ravindran Kannan | Ravindran Kannan (; born 12 March 1953, Madras) is a Principal Researcher at Microsoft Research India, where he leads the algorithms research group. He is also the first adjunct faculty of Computer Science and Automation Department of Indian Institute of Science.
Before joining Microsoft, he was the William K. Lanman Jr. Professor of Computer Science and Professor of Applied Mathematics at Yale University. He has also taught at MIT, CMU and IISc. The ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) presented its 2011 Knuth Prize to Ravi Kannan for developing influential algorithmic techniques aimed at solving long-standing computational problems. He also served on the Mathematical Sciences jury for the Infosys Prize in 2012 and 2013.
Ravi Kannan did his B.Tech at IIT Bombay and his PhD at Cornell University. His research interests include Algorithms, Theoretical Computer Science and Discrete Mathematics as well as Optimization. His work has mainly focused on efficient algorithms for problems of a mathematical (often geometric) flavor that arise in Computer Science. He has worked on algorithms for integer programming and the geometry of numbers, random walks in n-space, randomized algorithms for linear algebra and learning algorithms for convex sets.
Key contributions
Among his many contributions, two are:
Polynomial-time algorithm for approximating the volume of convex bodies
Algorithmic version for Szemerédi regularity partition
Selected works
Books
2013. Foundations of Data Science. (with John Hopcroft).
Other representative publications
"Clustering in large graphs and matrices," with P. Drineas, A. Frieze, S. Vempala and V. Vinay, Proceedings of the Symposium on Discrete Algorithms, 1999.
"A Polynomial-Time Algorithm for learning noisy Linear Threshold functions," with A. Blum, A. Frieze and S. Vempala, Algorithmica 22:35–52, 1998.
"Covering Minima and lattice point free convex bodies," with L. Lovász, Annals of Mathematics, 128:577–602, 1988.
Awards and honors
Joint Winner of the 1991 Fulkerson Prize in Discrete Mathematics for his work on the volumes of convex bodies.
Knuth Prize 2011 for developing influential algorithmic techniques aimed at solving long-standing computational problems.
In 2017 he became a Fellow of the Association for Computing Machinery.
See also
Szemerédi regularity lemma
Alan M. Frieze
Avrim Blum
László Lovász
References
External links
Ravi Kannan's home page
Distinguished Alumni Awardees 1999, IIT Bombay
Fulkerson Prize Award
Indian computer scientists
20th-century Indian mathematicians
Yale University faculty
Tamil scientists
IIT Bombay alumni
Cornell University alumni
Living people
1953 births
Indian Institute of Science faculty
21st-century Indian mathematicians
Fellows of the Association for Computing Machinery
Knuth Prize laureates |
55174106 | https://en.wikipedia.org/wiki/Ciaran%20Martin | Ciaran Martin | Ciaran Liam Martin (born 19 September 1974) was the first CEO of the National Cyber Security Centre (NCSC). In September 2020 he was appointed Professor of Practice in the Management of Public Organisations at the Blavatnik School of Government, University of Oxford.
Life
Martin was appointed head of cyber security at GCHQ in December 2013, and he recommended the establishment of a National Cyber Security Centre within the intelligence and security agency. This was agreed by the Government and announced by the Chancellor George Osborne in November 2015. Martin became the first Chief Executive in February 2016, and the NCSC became operational in October of that year. On 14 February 2017, the NCSC's new headquarters in Victoria in Central London were opened by Queen Elizabeth II.
Prior to joining GCHQ, Martin was Constitution Director at the Cabinet Office from 2011, helping to agree the framework for the Scottish independence referendum. From 2008 to 2011, he was Director of Security and Intelligence at the Cabinet Office. His public service career has also included a series of roles elsewhere in the Cabinet Office, as well as in HM Treasury and the National Audit Office (NAO).
He was a member of the GCHQ Board.
He is a past pupil of Omagh CBS, where he was very much seen as an all-rounder, being head-boy, a member of the MacRory Cup Gaelic football squad and keyboard player with indie rock outfit "Some Kind of Wonderful".
He is a graduate of Hertford College, University of Oxford, where he studied History.
Martin was appointed Companion of the Order of the Bath (CB) in the 2020 New Year Honours for services to international and global cyber security.
Martin had planned on resigning in June 2020 but delayed his resignation until August because of the COVID-19 pandemic. He was succeeded by Lindy Cameron.
In December 2002, Martin was the 'phone a friend' for Declan Montague on an episode of the ITV gameshow Who Wants to Be a Millionaire?.
References
Alumni of Hertford College, Oxford
Civil servants in the Cabinet Office
GCHQ people
GCHQ
Living people
Civil servants in HM Treasury
Civil servants in the National Audit Office (United Kingdom)
Companions of the Order of the Bath
1974 births |
641073 | https://en.wikipedia.org/wiki/Vertical%20bar | Vertical bar | The vertical bar, |, is a glyph with various uses in mathematics, computing, and typography. It has many names, often related to particular meanings: Sheffer stroke (in logic), pipe, vbar, stick, vertical line, bar, verti-bar, and several variants on these names.
Usage
Mathematics
The vertical bar is used as a mathematical symbol in numerous ways:
absolute value: |x|, read "the absolute value of x"
cardinality: |S|, read "the cardinality of the set S"
conditional probability: P(X|Y), reads "the probability of X given Y"
determinant: |A|, read "the determinant of the matrix A". When the matrix entries are written out, the determinant is denoted by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses of the matrix.
distance: P|ab, denoting the shortest distance between point P and line ab, so the segment P|ab is perpendicular to line ab
divisibility: a|b, read "a divides b" or "a is a factor of b", though Unicode also provides special 'divides' and 'does not divide' symbols (U+2223 and U+2224: ∣, ∤)
evaluation: f(x)|_{x=4}, read "f of x, evaluated at x equals 4" (see subscripts at Wikibooks)
length: |s|, read "the length of the string s"
norm: |v|, read "the norm of the (greater-than-one-dimensional) vector v" (note that absolute value is a one-dimensional norm), although a double vertical bar (see below) is more often used to avoid ambiguity.
order: |G|, read "the order of the group G"
restriction: f|_A, denoting the restriction of the function f, with a domain that is a superset of A, to just A
set-builder notation: {x | x < 2}, read "the set of x such that x is less than two". Often, a colon ':' is used instead of a vertical bar
the Sheffer stroke in logic: a | b, read "a nand b"
subtraction: f(x)|_a^b, read "f(x) from a to b", denoting f(b) − f(a). Used in the context of a definite integral with variable x.
A vertical bar can be used to separate variables from fixed parameters in a function, for example f(x | μ, σ), or in the notation for elliptic integrals.
The double vertical bar, ||, is also employed in mathematics.
parallelism: AB ∥ CD, read "the line AB is parallel to the line CD"
Norm: ||x||, read "the norm (length, size, magnitude etc.) of the vector x". People sometimes use two single bars in analogy to the absolute value, which is a one-dimensional norm.
Propositional truncation (a type former that truncates a type down to a mere proposition in homotopy type theory): for any a : A (read "term a of type A") we have |a| : ||A|| (here |a| reads "image of a in ||A||" and ||A|| reads "propositional truncation of A")
In LaTeX mathematical mode, the ASCII vertical bar produces a vertical line, and \| creates a double vertical line (a | b \| c is set as a|b‖c, with no extra space around the bars). This has different spacing from \mid and \parallel, which are relational operators: a \mid b \parallel c is set as a ∣ b ∥ c, with space around the bars. See below about LaTeX in text mode.
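As a brief, minimal sketch of the LaTeX source behind some of the notation listed above (assuming the amsmath package for \text, \lVert and \rVert; the particular formulas are only illustrative examples), the following lines typeset a set-builder expression, a conditional probability, an evaluation bar, a norm, and the spacing contrast just described:

\[ S = \{\, x \mid x < 2 \,\}, \qquad P(X \mid Y), \qquad f(x)\big|_{x=4}, \qquad \lVert \mathbf{x} \rVert \]
\[ a \mid b \parallel c \quad\text{(relational spacing)} \qquad\qquad a | b \| c \quad\text{(no extra spacing)} \]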
Physics
The vertical bar is used in bra–ket notation in quantum physics. Examples:
|ψ⟩: the quantum physical state ψ
⟨ψ|: the dual state corresponding to the state above
⟨φ|ψ⟩: the inner product of states φ and ψ
Supergroups in physics are denoted G(N|M), which reads "G, N vertical bar M"; here G denotes any supergroup, M denotes the bosonic dimensions, and N denotes the Grassmann dimensions.
Computing
Pipe
A pipe is an inter-process communication mechanism originating in Unix, which directs the output (standard out and, optionally, standard error) of one process to the input (standard in) of another. In this way, a series of commands can be "piped" together, giving users the ability to quickly perform complex multi-stage processing from the command line or as part of a Unix shell script ("bash file"). In most Unix shells (command interpreters), this is represented by the vertical bar character. For example:
grep -i 'blair' filename.log | more
where the output from the grep process (all lines containing 'blair') is piped to the more process (which allows a command line user to read through results one page at a time).
The same "pipe" feature is also found in later versions of DOS and Microsoft Windows.
This usage has led to the character itself being called "pipe".
Disjunction
In many programming languages, the vertical bar is used to designate the logic operation or, either bitwise or or logical or.
Specifically, in C and other languages following C syntax conventions, such as C++, Perl, Java and C#, a | b denotes a bitwise or; whereas a double vertical bar a || b denotes a (short-circuited) logical or. Since the character was originally not available in all code pages and keyboard layouts, ANSI C can transcribe it in the form of the trigraph ??!, which, outside string literals, is equivalent to the | character.
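As a brief illustration (the class and method names below are invented for the example), the following Java sketch contrasts the two operators:
public class OrDemo {
    public static void main(String[] args) {
        int a = 0b0101;  // 5
        int b = 0b0011;  // 3
        // Bitwise OR combines the bits of both operands: 0b0111, i.e. 7.
        System.out.println(a | b);
        boolean ready = true;
        // Short-circuit logical OR: expensiveCheck() is never called here,
        // because the left-hand operand is already true.
        boolean result = ready || expensiveCheck();
        System.out.println(result);
    }
    private static boolean expensiveCheck() {
        System.out.println("this message is never printed in this example");
        return false;
    }
}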
In regular expression syntax, the vertical bar again indicates logical or (alternation). For example: the Unix command grep -E 'fu|bar' matches lines containing 'fu' or 'bar'.
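A minimal Java sketch of the same alternation, using the standard java.util.regex package (the class name and test strings are illustrative only):
import java.util.regex.Pattern;
public class AlternationDemo {
    public static void main(String[] args) {
        // "fu|bar" matches any input containing "fu" or "bar".
        Pattern p = Pattern.compile("fu|bar");
        System.out.println(p.matcher("snafu").find());    // true
        System.out.println(p.matcher("crowbar").find());  // true
        System.out.println(p.matcher("baz").find());      // false
    }
}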
Concatenation
The double vertical bar operator "||" denotes string concatenation in PL/I, standard ANSI SQL, and theoretical computer science (particularly cryptography).
Delimiter
Although not as common as commas or tabs, the vertical bar can be used as a delimiter in a flat file. Examples of a pipe-delimited standard data format are LEDES 1998B and HL7. It is frequently used because vertical bars are typically uncommon in the data itself.
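A short, hypothetical Java sketch of splitting such a pipe-delimited record; note that the bar has to be escaped, because String.split treats its argument as a regular expression in which an unescaped | means alternation:
public class PipeSplitDemo {
    public static void main(String[] args) {
        // A hypothetical pipe-delimited record: surname|first name|date of birth.
        String record = "Smith|John|1985-04-12";
        // "\\|" escapes the bar; splitting on an unescaped "|" would split
        // the string between every character.
        String[] fields = record.split("\\|");
        for (String field : fields) {
            System.out.println(field);
        }
    }
}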
Similarly, the vertical bar may see use as a delimiter for regular expression operations (e.g. in sed). This is useful when the regular expression contains instances of the more common forward slash (/) delimiter; using a vertical bar eliminates the need to escape all instances of the forward slash. However, this makes the bar unusable as the regular expression "alternation" operator.
Backus–Naur form
In Backus–Naur form, an expression consists of sequences of symbols and/or sequences separated by '|', indicating a choice, the whole being a possible substitution for the symbol on the left.
Concurrency operator
In calculi of communicating processes (like pi-calculus), the vertical bar is used to indicate that processes execute in parallel.
APL
In APL, the pipe denotes the modulo or residue function when placed between two operands and the absolute value function when placed next to a single operand.
List comprehensions
The vertical bar is used for list comprehensions in some functional languages, e.g. Haskell and Erlang. Compare set-builder notation.
Text markup
The vertical bar is used as a special character in lightweight markup languages, notably MediaWiki's Wikitext (in the templates and internal links).
In LaTeX text mode, the vertical bar produces an em dash (—). The \textbar command can be used to produce a vertical bar.
Phonetics and orthography
In the Khoisan languages and the International Phonetic Alphabet, the vertical bar is used to write the dental click (ǀ). A double vertical bar is used to write the alveolar lateral click (ǁ). Since these are technically letters, they have their own Unicode code points in the Latin Extended-B range: U+01C0 for the single bar and U+01C1 for the double bar.
Some Northwest and Northeast Caucasian languages written in the Cyrillic script have a vertical bar called palochka (Ӏ), indicating that the preceding consonant is an ejective.
Longer single and double vertical bars are used to mark prosodic boundaries in the IPA.
Literature
Punctuation
In medieval European manuscripts, a single vertical bar was a common variant of the virgula used as a period, scratch comma, and caesura mark.
In Sanskrit and other Indian languages, a single vertical mark, a danda, has a similar function as a period (full stop). Two bars || (a 'double danda') is the equivalent of a pilcrow in marking the end of a stanza, paragraph or section. The danda has its own Unicode code point, U+0964.
Poetry
A double vertical bar, || or ‖, is the standard caesura mark in English literary criticism and analysis. It marks the strong break or caesura common to many forms of poetry, particularly Old English verse.
Notation
In the Geneva Bible and early printings of the King James Version, a double vertical bar is used to mark margin notes that contain an alternative translation from the original text. These margin notes always begin with the conjunction "Or". In later printings of the King James Version, the double vertical bar is irregularly used to mark any comment in the margins.
Music scoring
In music, when writing chord sheets, single vertical bars associated with a colon (|: A / / / :|) represent the beginning and end of a section (e.g. Intro, Interlude, Verse, Chorus) of music. Single bars can also represent the beginning and end of measures (|: A / / / | D / / / | E / / / :|). A double vertical bar associated with a colon can represent the repeat of a given section (||: A / / / :|| - play twice).
Encoding
Solid vertical bar vs broken bar
Many early video terminals and dot-matrix printers rendered the vertical bar character as the allograph broken bar (¦). This may have been to distinguish the character from the lower-case 'L' and the upper-case 'i' on these limited-resolution devices, and to make a vertical line of them look more like a horizontal line of dashes. It was also (briefly) part of the ASCII standard.
An initial draft for a 7-bit character set that was published by the X3.2 subcommittee for Coded Character Sets and Data Format on June 8, 1961, was the first to include the vertical bar in a standard set. The bar was intended to be used as the representation for the logical OR symbol. A subsequent draft on May 12, 1966, placed the vertical bar in column 7 alongside regional entry codepoints, and formed the basis for the original draft proposal used by the International Standards Organisation. This draft received opposition from an IBM user group known as SHARE, with its chairman, H. W. Nelson, writing a letter to the American Standards Association titled "The Proposed revised American Standard Code for Information Interchange does NOT meet the needs of computer programmers!"; in this letter, he argued that no characters within the international subset designated at columns 2-5 of the character set would be able to adequately represent logical OR and logical NOT in languages such as IBM's PL/I universally on all platforms. As a compromise, a requirement was introduced where the exclamation mark (!) and circumflex (^) would display as logical OR (|) and logical NOT (¬) respectively in use cases such as programming, while outside of these use cases they would represent their original typographic symbols.
The original vertical bar encoded at 0x7C in the original May 12, 1966 draft was then broken as , so it could not be confused with the unbroken logical OR. In the 1967 revision of ASCII, along with the equivalent ISO 464 code published the same year, the code point was defined to be a broken vertical bar, and the exclamation mark character was allowed to be rendered as a solid vertical bar. However, the 1977 revision (ANSI X.3-1977) undid the changes made in the 1967 revision, enforcing that the circumflex could no longer be stylised as a logical NOT symbol, the exclamation mark likewise no longer allowing stylisation as a vertical bar, and defining the code point originally set to the broken bar as a solid vertical bar instead; the same changes were also reverted in ISO 646-1973 published four years prior.
Some variants of EBCDIC included both versions of the character as different code points. The broad implementation of the extended ASCII ISO/IEC 8859 series in the 1990s also made a distinction between the two forms. This was preserved in Unicode as a separate character at U+00A6 BROKEN BAR (the term "parted rule" is used sometimes in Unicode documentation). Some fonts draw the characters the same (both are solid vertical bars, or both are broken vertical bars). The broken bar does not appear to have any clearly identified uses distinct from those of the vertical bar. In non-computing use — for example in mathematics, physics and general typography — the broken bar is not an acceptable substitute for the vertical bar.
Many keyboards with US or US-International layout display the broken bar on a keycap even though the solid vertical bar character is produced in modern operating systems. This includes many German QWERTZ keyboards. This is a legacy of keyboards manufactured during the 1980s and 1990s for IBM PC compatible computers featuring the broken bar, as such computers used IBM's 8-bit Code page 437 character set based on ASCII, which continued to display the glyph for the broken bar at codepoint 7C on displays from MDA (1981) to VGA (1987) despite the changes made to ASCII in 1977.
The broken bar character can be typed (depending on the layout) with various key combinations on Windows and Linux. It can be inserted into HTML with the named character reference &brvbar; or the numeric reference &#166;.
In some dictionaries, the broken bar is used to mark stress that may be either primary or secondary. That is, covers the pronunciations and .
Unicode code points
These glyphs are encoded in Unicode as follows:
U+007C | VERTICAL LINE (single vertical line)
U+00A6 ¦ BROKEN BAR (single broken line)
U+2016 ‖ DOUBLE VERTICAL LINE: used in pairs to indicate norm
U+FF5C ｜ FULLWIDTH VERTICAL LINE (fullwidth form)
U+2502 │ BOX DRAWINGS LIGHT VERTICAL (and various other box drawing characters in the range U+2500 to U+257F)
Code pages and other historical encodings
See also
Bar (diacritic)
Triple bar
Notes
References
Punctuation
Typographical symbols
Logic symbols |
18124451 | https://en.wikipedia.org/wiki/Agora%20Center | Agora Center | The Agora Center is a separate institute at the University of Jyväskylä in Central Finland. By its nature, the Agora Center is interdisciplinary and networked. Its purpose is to conduct, coordinate, and administrate top-level research and development that relates to the knowledge society and which places emphasis on the human perspective. The research and development is conducted in the form of fixed-period projects in cooperation with the University of Jyväskylä’s other faculties and separate institutes, businesses, the public sector and other relevant parties. The Agora Center also promotes researcher training through its various research projects. One of the core missions of the Agora Center is to effectively combine research and development with education. The project staff includes a high number of students and post-graduate students.
Research in the Agora Center is mainly based on Human Technology. Human Technology refers to a human-centred approach to technological systems and methods that takes into account human needs and requirements, as well as the implications of technology for humans.
The Agora Center’s administration model follows the requirements of being a separate institute of the University of Jyväskylä and the needs for networking. The directors of the Agora Center are professors from different departments who work in this position in addition to their departmental commitments and activities. The Agora Center has an interdisciplinary Managing Board, on which all of the faculties of the University of Jyväskylä are represented. The Agora Center also has an international Advisory Board.
History
In order to enable growth of Human Tech research, the facilities of the Agora Human Technology Center were built in 2000. The University simultaneously created the Faculty of Information Technology, combining the Department of Computer Science and Information Systems, the Department of Mathematical Information Technology and the Information Technology Research Institute. The faculty plays the main role in technological research within the Agora concept.
The other essential side of the Agora concept is constituted by the human and social sciences. From the beginning it was evident that to study human technology, an interdisciplinary approach was needed. The networked activity started in the form of the Psykocenter in 2000. In the initial period of the Center’s history, the network was associated with the Centre of Excellence in Psychology in order to coordinate the human-centred research related to the knowledge society.
However, it soon became clear that for the intended interdisciplinary research to function, the research operations needed an independent entity with its own, in other words autonomous, administration. Thus, in 2002, the University of Jyväskylä established the Agora Center as a separate institute that also included the Psykocenter. The Agora Center’s first operating period ran from 1 February 2002 to 31 July 2005; in April 2004, however, the lifespan of the Agora Center was extended to 2009.
In order to support the functioning and networking of the Agora Center’s interdisciplinary research environments, the Agora Center takes various forms of action. For example, it promotes researcher training and university education, supports researchers in managing interdisciplinary projects, holds open events and forums for multiple disciplines and sectors of society, surveys and promotes the wellbeing of its personnel and conducts Public Relations exercises.
Laboratories
The research and development activities in the Agora Center are carried out in interdisciplinary environments called laboratories. Each of them concentrates on a specific research area. The laboratories offer the environment for versatile research projects.
Mind Tech Laboratory
The Agora Mind Tech Laboratory develops and applies technology for the study of the human mind. Its objectives and strengths are to be found in the way it combines scientific and technological expertise in the fields of psychophysiological measurement and mathematical information technologies. The three major themes of the laboratory are learning, the evaluation of skills, and the neurological basis of perception. Central applications for this research are the treatment of dyslexia and the intensive training of language skills.
The laboratory utilizes fields such as psychology, information technology, statistics, and cognitive neuroscience. All research projects involve several international researchers and extensive cooperative networks.
Learning Laboratory
The Agora Learning Laboratory’s (ALL) multidisciplinary research center explores the use of virtual learning environments, knowledge in designing powerful new learning environments, pedagogical innovations, evaluation methods for e-learning purposes, and knowledge management. ALL works collaboratively with the Institute for Educational Research. The contexts of the ALL research projects transcend curricula areas, levels of education, and work organizations, with the primary aim of developing e-learning models.
Collaboration between the basic research conducted at the University of Jyväskylä and its practical application in educational and business organizations is highly valued. This integration of high-level scientific knowledge, pedagogical expertise, and product development know-how facilitates the rapid transfer of knowledge into meeting the needs of societies.
Game Laboratory
The multidisciplinary Agora Game Laboratory (AGL) focuses on the design, development, and research of digital games. The AGL offers an open forum for students, researchers, and others who are interested in games and gaming. Its multidisciplinary expertise covers such areas as educational sciences, computer science, mathematical information technology, and psychology. The research unit designs and studies both serious and entertainment games. Its strengths lie in the game-like learning environments and user-centred game design in which future users (e.g. children, young people, adults, elderly) actively participate in the design process. The laboratory also serves as a network for other actors in the field, and thus forms a link between the University, other game research networks, and businesses.
Industrial IT Research
The Agora Industrial IT Group organizes innovative interdisciplinary research and development in the area of information and communication technologies. It aims to advance the technological know-how of networked global industries. As the focus of global markets moves from products to services, businesses and industries require new, human-centred solutions based on information technology. The goal is to develop innovations that arise from international research cooperation, and to transfer new technology into the business world. The strengths of the group are scientific computation and optimisation, and, by extension, solutions to support production and logistics in businesses’ product design. In addition to the technological applications, the projects emphasize developing human resources and business competitiveness and attend to the role of the person as the user of technology. The group collaborates closely with public sector organizations.
User Psychology Laboratory
The Agora User Psychology Laboratory investigates the behavior of the user with technologies and services in various types of interactive situations. The models of user behavior are valuable, for example, in device and software design, in education and training, and in industrial R&D processes. Basic research, as well as usability analyses of companies’ products and services, is conducted in the laboratory. Applications can be found in wide areas of human activity. Examples of research topics include mobile services, e-learning, agent technologies, ubiquitous computing, vehicle–driver interaction, surgery simulation, emotional computing and mobile social networks. At the core of the research in the laboratory is a holistic view of the user; consequently, the research must be multidisciplinary. The laboratory’s researchers come from the fields of cognitive science, information systems, education, psychology, and art research.
Virtual Environment Center
The Agora Virtual Environment Center (AVEC) conducts research in the fields of computer graphics and virtual reality. It also offers visualisation and graphics programming services to various departments within the University of Jyväskylä and to companies. AVEC also arranges seminars and lecture courses. The foci of AVEC are virtual reality applications, interactions in virtual environments, and programming architectures for virtual reality applications. When selecting and implementing research topics, AVEC’s management emphasizes issues that have technical, financial, and/or scientific significance. The topics also develop the use of virtual reality visualisation in both research and industry. Virtual reality visualisation can be used to improve understanding of complex structures and phenomena. For example, a virtual prototype can reduce the amount of time an industry invests in the product development cycle, and a simulated model of a product can be built and tested in a virtual environment. Virtual reality simulations can significantly improve a product’s quality and functionality.
Innoroad Laboratory
The Agora Innoroad Laboratory serves as a forum for international cooperation and regional collaboration in multidisciplinary research on road traffic and other driver and transportation-related issues. The Lab’s key strengths include the modeling, optimisation, and application of ICT tools in traffic and transportation, as well as research in predicting human error. Among the projects ongoing in the lab are research funded by the Research Council of Norway on optimising newspaper delivery and waste collection, and the development of new optimization tools for communal logistic applications, funded by the KAKS foundation. A driving simulator is used to study people’s driving skills and what they do under various circumstances. This device also provides training and evaluation for professionals for whom driving is an essential element, such as the drivers of emergency vehicles. The simulator also provides opportunities to evaluate the driving skills of, for instance, old and young drivers, and to develop innovations and new products that enhance driving, traffic, and transportation systems.
Service Sciences Laboratory
Services science is an interdisciplinary field that seeks to bring together knowledge from diverse areas to improve the service operations, performance, and innovation within industry and the public sector. In essence, it represents a melding of technology with an understanding of human behaviour and thinking, as well as business processes and organization. With methods such as scientific computing, overall IT expertise and knowledge of human behaviour, the Agora Service Sciences Laboratory aims at creating new service solutions that can range, for instance, from new applications, a better understanding of the research subject, or new and improved operating models to enhanced logistical structures. Much of the research done in the Service Sciences Laboratory has been related to promoting the health care and wellbeing of people through an interdisciplinary and multi-partner approach. The experts at the Agora Center have collaborated closely across disciplines with each other, end-users and other partners to create innovative new solutions that enhance the operations of social and health care services. For example, in the NOVA project the operations of an emergency duty clinic of the Central Finland Health Care District were enhanced through ICT-based simulations targeting basic and special health care processes and structures, health care economics and the quality of activities. The aim, which was successfully met, was to speed up the patient’s treatment process from the diagnosis phase to the ward.
Online Journal Human Technology
The international journal Human Technology: An Interdisciplinary Journal on Humans in ICT Environments is an open-access, on-line, scholarly journal that explores current topics regarding the interaction between people and technology. Innovative, peer-reviewed articles in Human Technology address the issues and challenges surrounding the role of humans in all areas of our ICT-infused societies. It draws its authors, referees, editors, and readers from both the scientific and business communities around the world.
International Recognition 2005–2007
Internationally Awarded Breakthroughs in Research:
Society for Research in Child Development and International Society for the Study of Behavioural Development awards, given to Professor Lea Pulkkinen
Philips Nordic Prize, awarded to Professor Heikki Lyytinen
Czech Academy Bernard Bolzano Honorary Medal, awarded to Pekka Neittaanmäki
Several internationally funded projects, including an EU Marie Curie Excellence Team
International open-access online journal Human Technology
The Centre of Excellence in Learning and Motivation, appointed by the Academy of Finland
Center for Scientific Computing and Optimization in Multidisciplinary Applications (SOMA)
Two FiDiPro professorships
Commission and funding from the Nordic Innovation Centre to coordinate Nordic serious games development and research work.
See also
University of Jyväskylä
References
External links
Agora Center
Journal of Human Technology
Jyväskylä
Jyvaskyla
Buildings and structures in Central Finland |
67143412 | https://en.wikipedia.org/wiki/Later%20Alligator%20%28video%20game%29 | Later Alligator (video game) | Later Alligator is a 2019 point-and-click adventure game developed by American studio Pillow Fight. The game tasks players with exploring Alligator New York City and playing various mini-games to solve a mystery. The game was released on September 18, 2019, for Microsoft Windows and macOS. A Linux version was released the following December. A port for the Nintendo Switch was released on March 16, 2021, including a physical release through Fangamer.
Gameplay
In Later Alligator, players control an unnamed individual who is hired by Pat, an anxious and paranoid alligator. Pat believes his family is going to murder him that night at something called "the Event", and asks for help in finding out their plans.
The player can travel to different areas of Alligator New York City to locate various members of Pat's family, asking each of them questions about themselves and the nature of the Event. To gain more information, the player must complete requests or challenges offered by each character, which take the form of different mini-games. These mini-games feature objectives such as achieving a set score on a pinball machine, solving a sliding puzzle, or completing a Tower of Hanoi style game about stacking pancakes. Players will be awarded a badge bearing a family member's likeness upon completing their associated mini-game. Collectible puzzle pieces can also be found scattered around the different environments, which are used in a later mini-game.
Each time the player plays a mini-game or travels to a different area, time passes on the in-game clock. When the clock reaches 8:00 PM, the player will be transported to the game's final mini-game, with the game's ending determined by the number of badges obtained. Upon finishing the game, the player is able to restart the story with all their badges and puzzle pieces retained, giving them the opportunity to play the mini-games they missed and collect the remaining badges to achieve the best ending.
Development
Later Alligator was announced on November 29, 2018, with a set release date of Spring 2019. The game began as a collaboration between SmallBü Animation and Pillow Fight. The developers had met previously, when the idea of creating a game had first been mentioned. A few months later, SmallBü pitched the idea for Later Alligator to Pillow Fight, and work on the game began. SmallBü would animate, pitch ideas and write, and Pillow Fight would build prototypes and program the game. The mini-games were created to use simple concepts, in order to be easier to code and design.
One of the inspirations for the game's visual style was a collection of photos "of old, ornate homes from the '50s and '70s", which contributed to the game's film noir style. According to Pillow Fight and SmallBü, the gameplay was inspired by Japanese visual novels and the Professor Layton series. The game was created over two years, and features over 80,000 frames of animation. One challenge was the memory requirement of each animation, which became a problem in detailed scenes with numerous animated elements. This was eventually fixed by packaging each PNG individually. The game was animated in Toon Boom Harmony, with some post-production work done in Adobe After Effects.
On September 18, 2019, Later Alligator launched for Windows and macOS on Steam and Itch.io.
Reception
Later Alligator received positive reviews from critics who praised its animation and humor, but criticized some of its mini games as being repetitive. Polygon's Jeff Ramos praised the character interactions as being filled with charming animation and dialogue. Bryce O'Connor, writing for Adventure Gamers, liked the game's locations, saying "Exploring Alligator New York City is a joy, with each screen—from a dusty antique shop to the dark alley in the “unsavory part of town” to an Alligator Memorial Park—featuring fun details and things to interact with".
CGM's Kris Goorhuis liked the game's atmosphere, saying that "The game feels a certain way to look at – classy coffee counters and seedy bars more akin to small-town diners with colourful faces poised to say goofy things".
Nintendo World Report enjoyed the cartoon visuals of Later Alligator, saying that it provided a contrast with the game's film noir aesthetic. The reviewer also gave notice to the dialogue, writing that "interactions with the family are wacky, short conversations that highlight the idiosyncrasies of their individual personalities. Not one identical, and each brings their own brand of humor making the interactions feel unique". Nintendo Life's Kate Gray appreciated the animation style, saying that the game had a "unique style that's full of personality and charm". They criticized some of the minigame controls, saying that the cursor could be hard to control and could result in failing the minigame.
References
2019 video games
Adventure games
Point-and-click adventure games
Fictional crocodilians
Linux games
MacOS games
Mystery video games
Nintendo Switch games
Video games about reptiles
Video games developed in the United States
Video games set in New York City
Windows games |
28885987 | https://en.wikipedia.org/wiki/The%20Queen%27s%20Award%20for%20Enterprise%3A%20International%20Trade%20%28Export%29%20%282010%29 | The Queen's Award for Enterprise: International Trade (Export) (2010) | 'The Queen's Award for Enterprise: International Trade (Export) (2010)' was awarded on 21 April.
Recipients
The following ninety-eight organisations were awarded in 2010:
AMI Exchangers Limited of Hartlepool for charge air coolers and heat exchangers.
Abbot Group Limited of Aberdeen for onshore and offshore drilling engineering and rig design.
Aerospace Design & Engineering Consultants Limited of Stevenage, Hertfordshire for design and engineering services to commercial airlines and aircraft leasing companies
Alcatel-Lucent Submarine Networks Limited of London SE10 for telecommunications systems.
Allam Marine Ltd of Melton, Hull for industrial and marine generating sets.
Alperton International Limited of Spennymoor, for engineering goods and services.
Applied Acoustic Engineering Ltd of Great Yarmouth, Norfolk for underwater acoustic positioning, tracking and survey equipment.
Applied Language Solutions Ltd of Oldham, Lancashire for international language services including translation and interpreting.
Ashley Chase Estate of Dorchester, Dorset, for speciality hand-made English cheeses.
Autonomy Corporation of Cambridge for platform technology with a pure software model offering a full spectrum of mission-critical enterprise applications.
Baillie Gifford Overseas Limited of Edinburgh for investment management services.
Balmoral Comtec Ltd of Loirston, Aberdeen for surface and sub-surface buoyancy and elastomer products for the offshore energy sector.
Baring Asset Management Limited of London EC2 for fund management services.
The Binding Site Group Ltd of Kings Heath, Birmingham for immunodiagnostic kits.
Bio Products Laboratory (BPL) of Elstree, Hertfordshire for therapeutic proteins
The Book Depository Limited of Gloucester, an online book retailer.
Brompton Bicycle Ltd of Brentford, Middlesex for folding bicycles.
Bupa International of Brighton, East Sussex for private medical insurance.
CarnaudMetalbox Engineering Limited of Shipley, West Yorkshire for can making machinery.
Centrax Ltd of Newton Abbot, Devon for gas turbine generator sets.
Chelton Limited of Marlow, Buckinghamshire for aircraft and ground antennas and related equipment for military and commercial use.
Alfred Cheyne Engineering Limited of Banff, Aberdeenshire, for winches.
ContiTech Beattie Ltd of Ashington, Northumberland for flexible hoses, couplings and fluid transfer systems to the oil & gas industry.
Controlled Therapeutics (Scotland) Ltd of East Kilbride, Lanarkshire for unique polymer delivery system for the precise administration of drugs
Crittall Windows Ltd of Witham, Essex for steel windows and doors.
Dart Sensors Ltd of Exeter, Devon for electrochemical sensors for breath alcohol and toxic gases.
Douglas Equipment Limited of Cheltenham, Gloucestershire for aviation towing tractors & helicopter/aircraft flight deck handlers
Dynex Semiconductor Ltd of Lincoln for high power semiconductor devices and assemblies.
TG Eakin Limited of Comber, County Down, Northern Ireland for disposable medical devices used for the treatment of stoma and wound care patients.
Euravia Engineering & Supply Co Ltd of Kelbrook, Lancashire for aero-engine design, overhaul, test and certification services.
FA Premier League of London W1 for sale of TV rights to foreign broadcasters.
First Magazine Limited of London SW1 a publisher of periodicals, special reports and text books.
Future Health Technologies Ltd of Nottingham for its stem cell bank.
Gilbert Gilkes & Gordon Ltd of Kendal, Cumbria for hydro electric turbines and engine cooling pumps.
Hallin Marine UK Ltd of Dyce, Aberdeen, for subsea services to the oil and gas industry.
Imagination Technologies Ltd of Kings Langley, for graphics, video, audio and communication software.
Industrial Penstocks Ltd of Netherton, Dudley, for fluid control devices.
The Innis & Gunn Brewing Company Ltd of Edinburgh for range of oak-aged specialty beers.
Investment Property Databank of London EC1 for provision of portfolio analysis services and financial indices to the investment property industry.
JDR Cable Systems Ltd of Littleport, Cambridgeshire for subsea umbilicals and power cables for the offshore oil and gas and renewable energy industries.
KHL Group LLP of Wadhurst, East Sussex, for publishing and magazine advertising.
Kestrel Liner Agencies Ltd of Basildon, Essex for shipping liner agency and global freight management.
Kilfrost Limited of Newcastle upon Tyne for de/Anti-icing fluids for the global aviation industry.
Latens Systems Ltd of Belfast, Northern Ireland for pay for television software development.
London College of Accountancy of London SE1 for accountancy, business and management education.
McCalls Special Products Limited of Rotherham, for threaded bar and cable systems.
McKinney Rogers International Limited of London SW1 for business execution services.
Metal and Waste Recycling Limited of London N18 for recycling of scrap metal and waste.
Micro Nav Limited of Bournemouth, Dorset for software and related services for airport and air traffic control simulation.
Midsteel Flanges and Fittings Limited of Kingswinford, West Midlands for flanges, butt weld, forged fittings and ancillary piping products.
Moog Components Group Limited of Reading, Berkshire for electrical slip rings and motion control components.
Moog Insensys Ltd (Wind Energy Division) of Southampton, Hampshire for measurement and analysis systems for the wind energy market.
Naim Audio Ltd of Salisbury, Wiltshire for hi-fi audio and audio-video systems.
Offshore Design Engineering Limited of Kingston upon Thames, for engineering consultancy project management.
Oil Consultants Ltd of Washington, Tyne and Wear for engineering consulting services to the upstream oil industry.
Isabella Oliver Limited of London NW5 for designer and online retailer of women's wear and maternity clothes.
Pace plc of Saltaire, West Yorkshire for set-top boxes & digital home entertainment equipment.
Parker Hannifin Ltd (Domnick Hunter Industrial) of Gateshead, Tyne and Wear for compressed air filtration & gas separation.
Pearson PLC of London WC2 for provision of educational material and technology, consumer books and business information.
Pelam Foods Limited of Chesham, Buckinghamshire for food and drink exports.
Penlon Limited of Abingdon, Oxfordshire for medical devices.
Penn Pharmaceutical Services Limited of Tredegar for pharmaceuticals.
The Penspen Group Limited of Richmond, Surrey for engineering, project management, operations, maintenance and integrity services.
Pipeline Engineering & Supply Co. Ltd of Richmond, North Yorkshire for pipeline pigging and flow assurance products and services to the oil and gas pipeline industry
Power Jacks Limited of Fraserburgh, Aberdeenshire for industrial lifting and positioning equipment.
Powercorp International Limited of London W1 for film and television programmes production.
Prism Ideas of Nantwich, Cheshire for drug development consultancy services and medical communications
Proto Labs Ltd of Telford, Shropshire for prototype injection moulded and CNC machined parts.
RMD Kwikform of Aldridge, West Midlands for hire, sale and engineering design to the construction industry.
Racal Acoustics Limited of Harrow, Middlesex for military communications ancillaries including military headsets.
Sandvik Osprey Limited of Neath, Port Talbot for gas atomised metal powders and controlled expansion alloy products.
Schrader Electronics Ltd of Antrim, County Antrim, Northern Ireland for electronic sensors and for ASIC's for automotive and industrial markets.
SCIPAC Ltd of Sittingbourne, Kent for reagents for medical diagnostic tests.
Select Biosciences of Sudbury, Suffolk for scientific conferences, training courses and consultancy.
SELEX Galileo, Radar and Advanced Targeting unit (UK) of Edinburgh for airborne radar and targeting design, manufacture, supply and support.
Sentec of Cambridge for smart metering and energy management solutions.
Sparrows Offshore Group Ltd of Aberdeen for lifting, handling and fluid power technology and services for the offshore energy industry.
Stannah Stairlifts Limited of Andover, Hampshire for design and manufacture of stair-lifts.
Strategy & Technology Limited of London EC1 for specialist software for digital interactive television.
Sunmark Ltd of Greenford, Middlesex for branded and own label food and drink products.
Syngenta Bioline Production Ltd of Little Clacton, Essex for beneficial insects and mites for pest control in crops.
Tamper Technologies Ltd of Ashbourne, Derbyshire for security and tamper evident labels and tapes to protect products and packaging.
Themis Ltd of Trowbridge, Wiltshire for marketing information services to the global pharmaceutical industry.
United Shield International Limited of Andover, Hampshire for personal ballistic protection.
Vectric Ltd of Feckenham, Redditch, Worcestershire for software development and solutions for computerized craft industry machines.
Vero Software Plc of Cheltenham, for CADCAM software for the mould and die industry.
Walkers Shortbread Limited of Aberlour on Spey for shortbread, oatcakes and other Scottish specialities.
Ward Shoes Ltd of Chapeltown, Sheffield for returned footwear and clothing.
Watkiss Automation Limited of Sandy, Bedfordshire for book binding machinery.
Williams Performance Tenders Ltd of Berinsfield, Oxfordshire for jet-powered ridged inflatable tenders for the marine leisure market.
Scott Wilson Group plc of London SW1 for design and engineering consultancy services.
Winn & Coales International Ltd of London SE27 for anti-corrosion and sealing products.
Wireless Innovation Ltd of Churcham, for satellite and wireless technology services.
Xennia Technology Limited of Letchworth, for ink-jet products and services.
Yellow Octopus Limited of Skipton, North Yorkshire for clothing and footwear.
References
Queen's Award for Enterprise: International Trade (Export)
2010 awards in the United Kingdom |
5724423 | https://en.wikipedia.org/wiki/Horizontal%20market%20software | Horizontal market software | In computer software, horizontal market software is a type of application software that is useful in a wide range of industries. This is the opposite of vertical market software, which has a scope of usefulness limited to few industries. Horizontal market software is also known as "productivity software." Examples of horizontal market software include word processors, web browsers, spreadsheets, calendars, project management applications, and generic bookkeeping applications. Since horizontal market software is developed to be used by a broad audience, it generally lacks any market-specific customizations.
See also
Horizontal market
Vertical market software
Vertical market
Product software implementation method
Enterprise resource planning
References
Software by type
Software |
15907898 | https://en.wikipedia.org/wiki/Dennis%20E.%20Wisnosky | Dennis E. Wisnosky | Dennis E. Wisnosky (born 1943) is an American consultant, writer and former chief architect and chief technical officer of the US DoD Business Mission Area (BMA) within the Office of Business Transformation. He is known as one of the creators and initiators of the Integrated Definition (IDEFs) language, a standard for modeling and analysis in management and business improvement efforts.
Biography
Dennis E. Wisnosky was born in Washington, Pennsylvania, and received his bachelor's degree in physics and mathematics from California University of Pennsylvania, a master's in management science from the University of Dayton, and a master's in electrical engineering from the University of Pittsburgh.
Wisnosky joined the US Air Force Materials Laboratory, Wright-Patterson Air Force Base, in Ohio in 1971, where he headed the computer and information services. In 1976 he became manager of its ICAM program. In 1986 he founded Wizdom Systems and became its chief executive officer. In August 2006 he was appointed chief technical officer (CTO) of the Department of Defense (DoD) Business Mission Area within the office of the Deputy Under Secretary of Defense for Business Transformation (OUSD (BT)).
As of 2013 he left DoD to lead the standards implementation process for FIBO, the financial industry business ontology that is a joint effort of The Enterprise Data Management Council in conjunction with the Object Management Group.
He has received numerous honors for his work, including in May 1997, Fortune magazine recognized Wisnosky as "one of the five heroes of manufacturing", the Federal 100 Award in 2007, the Award for Excellence in Government Leadership in 2012 and more.
Work
Wisnosky has made contributions in the fields of information technology (IT) consulting and training, including business process reengineering and enterprise architecture. His specialty has been deriving solutions to effectively move organizations from their "As-Is" state of inefficiency to their "To-Be" state of achieving strategic and tactical objectives.
Integrated computer-aided manufacturing
Dennis E. Wisnosky and Dan L. Shunk are recognized as co-founders of the ICAM program, which they founded in 1976. This program started as U.S. Air Force funded program for Integrated Computer-Aided Manufacturing, and was brought about by the "needs and pressures in state-of-the-art technologies, economics, increasing human limitations, aerospace design and manufacturing complexity, computer developments, and competition from abroad."
In the 1980s Joseph Harrington focused CIM on the manufacturing company as a whole. Harrington considered manufacturing a "monolithic function", and his book discussed how its functions could interact as a seamless whole. Harrington was helpful to Wisnosky and Shunk in designing the USAF's ICAM program in the mid-1970s, and their work, in turn, influenced Harrington's second book.
Group vice president, entrepreneur and CEO
Beginning in 1980, until asked to join the DoD, Wisnosky was a director and then an officer in public companies, and then founded and successfully exited, a series of his own companies – the Wizdom companies. These organizations specialized in delivering the products and services to manufacturing industries in the areas of robotics, factory control systems and business process reengineering.
Chief technical officer
As chief technical officer, Wisnosky was responsible for providing expert guidance and oversight in the design, development, and modification of the federated architectures supporting the Department's Business Mission Area. This role incorporates oversight of the DoD Business Enterprise Architecture (BEA) – the corporate level systems, processes, and data standards that are common across the DoD, in addition to the business architectures of the services and defense agencies.
As chief architect, Wisnosky ensured that the federated architectures of the BMA fully support the department's vision, mission, strategy and priorities for business transformation, and that each tier of the overall architecture is clearly defined with appropriate focused accountability aligned to the management structure of the DoD. He verified that the BEA and component architectures remained consistent and compliant with the federal enterprise architecture (FEA), and supported and collaborated with the DoD components to unify architecture planning, development, and maintenance through a federated approach. Wisnosky also served as an advisor on the development of requirements and the extension of DoD net-centric enterprise services in collaboration with the office of the DoD chief information officer. He was the first to introduce service-oriented architecture (SOA) to the business mission area (BMA) and established and led an enterprise approach to delivering BI based upon semantic technologies.
Books
Wisnosky has published books and papers in the fields of BPR, semiconductor processing, information technology, robotics and factory controls, management, SOA and semantic technology. He is the originator of the funnel visualization of enterprise control networks. He has authored or co-authored seven books. His book "Overcoming Funnel Vision", published in 1996, won critical acclaim. His book "DoDAF Wizdom", written in 2004, remains the definitive how-to guide for successfully building enterprise architectures using the DoD Architecture Framework (DoDAF).
Speaker
Regarded to be a technology visionary and an entrepreneur who can engineer and deliver products, he is a frequent speaker on service-oriented architecture (SOA) and semantic technologies. As a private citizen, he has testified before subcommittees of both the U.S. House and Senate on U.S. productivity issues and the quality of American work life.
Publications
Wisnosky has published over 100 papers and has authored or co-authored seven books in the fields of management, computer science, service-oriented architecture (SOA), enterprise architecture, knowledge management, computer-aided design/computer-aided manufacturing (CAD/CAM), electronics, computer-integrated manufacturing (CIM) and the semantic technologies/web. Books:
1977. An overview of the Air Force program for integrated computer aided manufacturing (ICAM). ICAM program prospectus. SME technical paper
1980. The Southfield Report on Computer Integrated Manufacturing: Productivity for the 1980s : Proceedings of a Joint DoD—industry Manufacturing Technology Workshop. With Joseph Harrington, Manufacturing Technology Advisory Group, CAD/CAM Subcommittee, Dept. of Defense, United States.
1981. Computer Integrated Manufacturing the Air Force ICAM Approach. Society of Manufacturing Engineers.
1996. Softlogic: Overcoming Funnel Vision: How and Why IEC 1131 Based Softlogic Frees the Enterprise to Become Agile and Profitable. With Michael Babb.
2000. Beyond the Supply Chain: A Step-by-Step Guide to Radically Improving Healthcare Delivery with Electronic Transactions. With Leon E. Salomon and Anne Jones. Wizdom Systems, Inc., 2000.
2001. BPR wizdom : a practical guide to BPR project management. With Rita C. Feeney. Wizdom Systems, 2001.
2004. Dodaf Wizdom: a Practical Guide to Planning, Managing and Executing Projects to Build Enterprise Architectures using the Department of Defense Architecture Framework. With Joseph Vogel. Wizdom Systems, Inc., 2004. .
Articles, a selection:
1969. "Triangular Sawtooth Sweep for NMR with Provision for Manual Operation or Time Averaging". Review of Scientific Instruments. Volume: 40, Issue: 3 Digital Object Identifier: 10.1063/1.1683981
2008. "DoD Business Mission Area Service-Oriented Architecture to Support Business Transformation. Wisnosky, D., Feldshteyn, D., Mancuso, W., Gough, A., Riutort, E., & Strassman, P. in: The Journal of Defense Software October 2008. ,
2009. "Principles and Patterns at the U.S. Department of Defense". Wisnosky, D. In: SOA Magazine, (Issue XXV). (January 19, 2009).
2010. "Primitives and the Future of SOA: DoD looks to develop a common vocabulary to improve system design". Yasin, R. (February 1, 2010). Government Computer News, Vol 29 (Issue 2), pp. 25–27.
2011. "Engineering Enterprise Architecture: Call to Action" .Wisnosky, D. (January 2011). Common Defense Quarterly'', (Issue 9), pp. 9–14.
References
External links
Radio interview: Federal TechTalk
Video Presentation: FIBO In the Semantic World
1942 births
Living people
People from Washington, Pennsylvania
Enterprise modelling experts
University of Dayton alumni
Swanson School of Engineering alumni |
6328370 | https://en.wikipedia.org/wiki/Classpath | Classpath | Classpath is a parameter in the Java Virtual Machine or the Java compiler that specifies the location of user-defined classes and packages. The parameter may be set either on the command-line, or through an environment variable.
Overview and architecture
Similar to the classic dynamic loading behavior, when executing Java programs, the Java Virtual Machine finds and loads classes lazily (it loads the bytecode of a class only when the class is first used). The classpath tells Java where to look in the filesystem for files defining these classes.
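A small Java sketch (class names here are invented for illustration) makes this lazy behaviour visible: the nested class's static initializer runs only when the class is first used, which is also when the virtual machine consults the classpath for its bytecode.
public class LazyLoadDemo {
    static class Helper {
        static {
            // Runs only when Helper is first used, not when the program starts.
            System.out.println("Helper class loaded and initialized");
        }
        static void greet() {
            System.out.println("Hello from Helper");
        }
    }
    public static void main(String[] args) {
        System.out.println("main started");
        // The bytecode for Helper is located via the classpath and loaded here.
        Helper.greet();
    }
}
Running it prints "main started" before the initializer's message, showing that Helper was not loaded at program start.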
The virtual machine searches for and loads classes in this order:
bootstrap classes: the classes that are fundamental to the Java Platform (comprising the public classes of the Java Class Library, and the private classes that are necessary for this library to be functional).
extension classes: packages that are in the extension directory of the JRE or JDK, jre/lib/ext/
user-defined packages and libraries
By default only the packages of the JDK standard API and extension packages are accessible without needing to set where to find them. The path for all user-defined packages and libraries must be set in the command-line (or in the Manifest associated with the Jar file containing the classes).
Setting the path to execute Java programs
Supplying as application argument
Suppose we have a package called org.mypackage containing the classes:
HelloWorld (main class)
SupportClass
UtilClass
and the files defining this package are stored physically under the directory D:\myprogram (on Windows) or /home/user/myprogram (on Linux).
The file structure looks like this:
D:\myprogram\
      |
      ---> org\
             |
             ---> mypackage\
                    |
                    ---> HelloWorld.class
                    ---> SupportClass.class
                    ---> UtilClass.class
When we invoke Java, we specify the name of the application to run: org.mypackage.HelloWorld. However, we must also tell Java where to look for the files and directories defining our package. So to launch the program, we use the following command:
java -classpath D:\myprogram org.mypackage.HelloWorld
where:
java is the Java runtime launcher, a type of SDK Tool (A command-line tool, such as javac, javadoc, or apt)
-classpath D:\myprogram sets the path to the packages used in the program (on Linux, -cp /home/user/myprogram) and
org.mypackage.HelloWorld is the name of the main class
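For reference, a minimal HelloWorld.java consistent with this layout might look as follows (a sketch; the article does not show the source). The package declaration is what requires the compiled class to sit under org\mypackage below the directory given on the classpath:
package org.mypackage;
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}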
Setting the path through an environment variable
The environment variable named CLASSPATH may be alternatively used to set the classpath. For the above example, we could also use on Windows:
set CLASSPATH=D:\myprogram
java org.mypackage.HelloWorld
The rule is that the -classpath option, when used to start the java application, overrides the CLASSPATH environment variable. If neither is specified, the current working directory is used as the classpath. This means that when our working directory is D:\myprogram\ (on Linux, /home/user/myprogram/), we would not need to specify the classpath explicitly. When overriding, however, it is advised to include the current folder "." in the classpath if loading classes from the current folder is desired.
The same applies not only to java launcher but also to javac, the java compiler.
Setting the path of a Jar file
If a program uses a supporting library enclosed in a Jar file called supportLib.jar, physically located in the directory D:\myprogram\lib\ and the corresponding physical file structure is:
D:\myprogram\
      |
      ---> lib\
      |      |
      |      ---> supportLib.jar
      |
      ---> org\
             |
             ---> mypackage\
                    |
                    ---> HelloWorld.class
                    ---> SupportClass.class
                    ---> UtilClass.class
the following command-line option is needed:
java -classpath D:\myprogram;D:\myprogram\lib\supportLib.jar org.mypackage.HelloWorld
or alternatively:
set CLASSPATH=D:\myprogram;D:\myprogram\lib\supportLib.jar
java org.mypackage.HelloWorld
Adding all JAR files in a directory
In Java 6 and higher, one can add all jar-files in a specific directory to the classpath using wildcard notation.
Windows example:
java -classpath ".;c:\mylib\*" MyApp
Linux example:
java -classpath '.:/mylib/*' MyApp
This works for both -classpath options and environment classpaths.
Setting the path in a manifest file
If a program has been enclosed in a Jar file called helloWorld.jar, located directly in the directory D:\myprogram, the directory structure is as follows:
D:\myprogram\
      |
      ---> helloWorld.jar
      |
      ---> lib\
             |
             ---> supportLib.jar
The manifest file defined in helloWorld.jar has this definition:
Main-Class: org.mypackage.HelloWorld
Class-Path: lib/supportLib.jar
The manifest file should end with either a new line or carriage return.
The program is launched with the following command:
java -jar D:\myprogram\helloWorld.jar [app arguments]
This automatically starts the org.mypackage.HelloWorld class specified in the Main-Class attribute, with the given arguments. The user cannot replace this class name at invocation time. Class-Path describes the location of supportLib.jar relative to the location of the library helloWorld.jar. Neither absolute file paths, which are permitted in the -classpath parameter on the command line, nor jar-internal paths are supported. This means that if the main class file is contained in a jar, org/mypackage/HelloWorld.class must be a valid path at the root of that jar.
Multiple classpath entries are separated with spaces:
Class-Path: lib/supportLib.jar lib/supportLib2.jar
OS specific notes
Being closely associated with the file system, the command-line Classpath syntax depends on the operating system. For example:
on all Unix-like operating systems (such as Linux and Mac OS X), the directory structure has a Unix syntax, with separate file paths separated by a colon (":").
on Windows, the directory structure has a Windows syntax, and each file path must be separated by a semicolon (";").
This does not apply when the Classpath is defined in manifest files, where each file path must be separated by a space (" "), regardless of the operating system.
See also
Java Classloader
Java Module System
References
External links
Note explaining how Java classes are found, on Oracle website
Specification of how to set the Classpath on Oracle site
Classpath |
6523894 | https://en.wikipedia.org/wiki/Cisco%20Security%20Agent | Cisco Security Agent | Cisco Security Agent (CSA) was an endpoint intrusion prevention system made originally by Okena (formerly named StormWatch Agent), which was bought by Cisco Systems in 2003. The software is rule-based and examines system activity and network traffic, determining which behaviors are normal and which may indicate an attack. CSA was offered as a replacement for Cisco IDS Host Sensor, which was announced end-of-life on 21 February 2003. This end of life action was the result of Cisco's acquisition of Okena, Inc., and the Cisco Security Agent product line based on the Okena technology would replace the Cisco IDS Host Sensor product line from Entercept. As a result of this end-of-life action, Cisco offered a no-cost, one-for-one product replacement/migration program for all Cisco IDS Host Sensor customers to the new Cisco Security Agent product line. The intent of this program was to support existing IDS Host Sensor customers who choose to migrate to the new Cisco Security Agent product line. All Cisco IDS Host Sensor customers were eligible for this migration program, whether or not the customer had purchased a Cisco Software Application Support (SAS) service contract for their Cisco IDS Host Sensor products.
CSA uses a two or three-tier client-server architecture. The Management Center 'MC' (or Management Console) contains the program logic; an MS SQL database backend is used to store alerts and configuration information; the MC and SQL database may be co-resident on the same system. The Agent is installed on the desktops and/or servers to be protected. The Agent communicates with the Management Center, sending logged events to the Management Center and receiving updates in rules when they occur.
A Network World article dated 17 December 2009 stated "Cisco hinted that it will end-of-life both CSA and MARS". Full article linked below.
On 11 June 2010, Cisco announced the end-of-life and end-of-sale of CSA. Cisco did not offer any replacement product.
See also
Network Intrusion Prevention System
References
External links
- "Surviving the Cisco CSA Transition" Endpoint Security Whitepaper
Alternative to CSA
End-of-Life Announcement - Cisco Press Release
Cisco Security Agent - Cisco's product page for the Agent software
Cisco IT Case Study about Cisco Security Agent
Cisco IDS Host Sensor Migration Program
EOS and EOL of for the Cisco IDS Host Sensor Product Line
Cisco hinted EOL for CSA - Network World article
Internet Protocol based network software
Computer network security
MacOS security software
Windows security software
Solaris software
Cisco products |
46823030 | https://en.wikipedia.org/wiki/Hubstaff | Hubstaff | Hubstaff is a remote company that created a workforce management software suite that offers proof of work, time-tracking software, and payroll management, along with a remote talent finder and project management software. Founded in 2012 by Dave Nevogt and Jared Brown, today Hubstaff employs a workforce of more than 90 people across the world.
The company values freedom, transparency, a customer-first approach, accountability, and attentional control.
Hubstaff was seen as a rising technology company in 2015, when it received a nomination as part of Techpoint’s Mira Awards for The Best of Tech in Indiana. The company also made the Inc. 5000 list in 2018 and 2019.
History
Dave Nevogt and Jared Brown founded Hubstaff after they began to hire freelancers and wanted a better way to manage them. Nevogt was previously the founder of McCordsville-based Innovative Solutions Inc., while Brown had a background as a developer.
Following the creation of the software with the same name as the company, the outsourcing of freelance work became more common with the development of sites such as Elance and oDesk. Hubstaff considered that the use of its software allowed entrepreneurs and startup companies to focus on the strategic side of the business, rather than operational tasks. The use of freelance management software became more frequent as web-based startups began to outsource the majority of their operational teams.
In 2014, the company was quoted in the Huffington Post as a commentator on red flags to look for when recruiting on LinkedIn. The analysis carried out by Hubstaff included spotting spelling or grammatical mistakes, as these can demonstrate a sloppy attitude towards detail and communication.
Hubstaff was a nominee for the Best Tech in Indiana Award in the Tech Startup of the Year category in 2015.
In April 2020, an article was published on The Wall Street Journal website, discussing how companies were seeing increases in productivity while using the software developed by Hubstaff.
In May 2020, Hubstaff's co-founder Jared Brown appeared on NBC's The Today Show and talked about the role of software developed by the company in supporting businesses in the shift to remote work during the COVID-19 pandemic.
References
External links
Companies based in Indianapolis
Software companies based in Indiana
Remote companies
Software companies established in 2012
American companies established in 2012
2012 establishments in Indiana |
545978 | https://en.wikipedia.org/wiki/Experian | Experian | Experian is an Anglo–Irish multinational consumer credit reporting company. Experian collects and aggregates information on over 1 billion people and businesses including 235 million individual U.S. consumers and more than 25 million U.S. businesses.
Based in Dublin, Ireland, the company operates in 37 countries with offices in Brazil, the United Kingdom, and the United States. The company employs approximately 17,000 people and had a reported revenue of US$5.18 billion for the fiscal year ended in March 2020. It is listed on the London Stock Exchange and is a constituent of the FTSE 100 Index. Experian is a partner in the U.K. government's Verify ID system and USPS Address Validation. It is one of the "Big Three" credit-reporting agencies, alongside TransUnion and Equifax.
In addition to its credit services, Experian also sells decision analytic and marketing assistance to businesses, including individual fingerprinting and targeting. Its consumer services include online access to credit history and products meant to protect from fraud and identity theft. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year.
History
The company has its origins in Credit Data Corporation, a business which was acquired by TRW Inc. in 1968, and subsequently renamed TRW Information Systems and Services Inc.
In November 1996, TRW sold the unit, as Experian, to Bain Capital and Thomas H. Lee Partners. Just one month later, the two firms sold Experian to The Great Universal Stores Limited in Manchester, England, a retail conglomerate with millions of customers paying for goods on credit (later renamed GUS). GUS merged its own credit-information business, CCN, which at the time was the largest credit-service company in the UK, into Experian.
In October 2006, Experian was demerged from GUS and listed on the London Stock Exchange.
In August 2005, Experian accepted a settlement with the Federal Trade Commission (FTC) over charges that Experian had violated a previous settlement with the FTC. The FTC alleged that ads for the "free credit report" did not adequately disclose that Experian customers would automatically be enrolled in Experian's $79.95 credit-monitoring program.
In January 2008, Experian announced that it would cut more than 200 jobs at its Nottingham office.
Experian shut down its Canadian operations on 14 April 2009.
In March 2017, the U.S. Consumer Financial Protection Bureau fined Experian $3 million for providing invalid credit scores to consumers.
In October 2017, Experian acquired Clarity Services, a credit bureau specialising in alternative consumer data.
Operations
In the United States, like the other major credit reporting bureaus, Experian is chiefly regulated by the Fair Credit Reporting Act (FCRA). The Fair and Accurate Credit Transactions Act of 2003, signed into law in 2003, amended the FCRA to require the credit reporting companies to provide consumers with one free copy of their credit report per 12-month period. Like its main competitors, TransUnion and Equifax, Experian markets credit reports directly to consumers. Experian heavily markets its for-profit credit reporting service, FreeCreditReport.com, and all three agencies have been criticised and even sued for selling credit reports that can be obtained at no cost.
Its market segmentation tool, Mosaic, is used by political parties to identify groups of voters. In the British version there are 15 main groups, broken down into 89 hyperspecific categories, from "corporate chieftains" to "golden empty-nesters" which can be taken down to the level of individual postcodes. It was first used by the Labour Party, but then taken up by the Conservatives in the 2015 General Election campaign.
Sales to identity thieves
In 2013 a Vietnamese national, Hieu Minh Ngo, was charged by the U.S. Department of Justice with attempting to sell personally identifiable information on hundreds of thousands of U.S. residents. This information had allegedly been purchased from Experian subsidiary and data aggregator Court Ventures. However, Ngo testified under oath that the information he had sold to identity thieves had actually been acquired from another hacker based in Russia, and not Experian or Court Ventures. Ngo then resold the information he acquired from the Russian hacker through the identity-fraud-enabling websites Superget.info and Findget.me. The information offered for anonymous sale on these websites included an individual's name, address, Social Security number, date of birth, place of work, duration of work, state driver's licence number, mother's maiden name, bank account number(s), bank routing number(s), email account(s) and other account passwords.
2015 data breach
On 1 October 2015 Experian announced that they had discovered a data breach existing between 1 September 2013 and 16 September 2015. As many as 15 million people who used the company's services, among them customers of American cellular company T-Mobile who had applied for Experian credit checks, may have had their private information exposed.
2020 data breach
In 2020 it was revealed that Experian had suffered a further data breach, on this occasion in South Africa. Initially, Experian claimed that the incident had been contained but subsequently this was shown to be untrue. Data on 24 million South Africans was leaked, as well as on nearly 800,000 businesses. Of these, 24,838 had financial details leaked.
2021 data breach
In January 2021 a new leak was revealed in Brazil, with the source linked to Experian's Brazilian subsidiary Serasa Experian. The breach resulted in data on 220 million citizens (including some already deceased) being sold on the web. It is probably the most severe data breach in history, as it includes names, social security numbers, income tax declaration forms, addresses and other private information on nearly all Brazilian citizens. Experian claims there is no evidence that its systems have been compromised, but this lack of evidence does not explain why the company appears to be the only probable source of the data. According to a Brazilian consumer rights foundation, the company has not been handling the breach appropriately.
See also
Compuscan
Credit bureau
Credit rating agency
Equifax
Freecreditreport.com
Identity theft
Intelliscore
TransUnion
Explanatory notes
References
External links
Collection agencies
Companies based in Dublin (city)
Financial services companies established in 1996
Companies listed on the London Stock Exchange
Companies of the Republic of Ireland
Credit scoring
Data brokers
Data companies
Data quality companies
Market research organizations
TRW Inc.
Multinational companies headquartered in the Republic of Ireland
Tax inversions |
56283907 | https://en.wikipedia.org/wiki/Hacker%20%28film%29 | Hacker (film) | Hacker (theatrically released as Anonymous) is a 2016 crime thriller, directed by Akan Satayev, about a group of young hackers who get involved with an online crime group and black market dealers across Toronto, Hong Kong, New York and Bangkok. The cast consisted of Callan McAuliffe, Lorraine Nicholson, Daniel Eric Gold, and Clifton Collins Jr. The story was loosely based on real events. The screenplay was written by Timur Zhaxylykov and Sanzhar Sultan, who also produced in association with Brillstein Entertainment Partners. The film had a limited release in the US (under the title Anonymous), on December 2, 2016. Following that, Sony Pictures released the film on home entertainment.
Plot
The Danyliuks, a family of immigrants who move to Canada, subsequently struggle for money and have to live on welfare.
Alex grows up with few friends and spends his time online, later making money as a "clicker", generating traffic to produce revenue for websites; the work is poorly paid and monotonous.
He eventually leaves home to live in Toronto and to study in college. To make extra money to support himself, Alex Danyliuk turns to a life of crime and identity theft with the help of Sye, a street-wise hustler who introduces him to the world of black-market trading. They start with credit cards: lost, stolen and fake transactions. Alex gets caught trying to defraud the Royal Bank of Canada, and persuades Curtis, the bank's head of security, to make a deal and let him go.
After finding success in causing financial market chaos, they gain the attention of Z, a mysterious masked figure who is the head of an organization known as Anonymous and a top target of the FBI.
He also meets Kira, a young female hacker, and asks her to join him and Sye. They start printing their own credit cards and dealing in Bitcoin, and together they are very successful. Sye grows suspicious of the way Kira is working, and Kira suggests that they relocate to Hong Kong, leaving Sye behind. Kira sets up a deal with the Mob, which goes badly wrong.
All three relocate to Hong Kong to avoid further attention from the Mob. Things do not go to plan in Hong Kong: after a fight in a nightclub they end up in jail, and Kira posts bail to get them out. Sye obtains money using an old card which Alex had asked him not to use, which ends in a huge row, with Alex asking him to leave.
Alex and Kira go around the ATMs of Hong Kong, pocketing $2.3 million and leaving dark-web calling cards behind after every transaction. Kira wants to retire, but Alex wants to continue. Zed contacts him, and Alex arranges to meet him in person. They meet in an old factory and are surprised to find Z in a wheelchair and heavily scarred. Alex and Kira agree to work for Z, who explains the heavy penalties for failure.
Sye returns to Hong Kong and makes contact with Alex from their hotel, but the Mob gets to the hotel first and kills him before Alex and Kira can return. They continue with the agreed deal with Z to crash the stock market. Snipers shoot the Chairman of the Federal Reserve, causing the markets to panic and crash. It later becomes apparent that he was an impostor and the assassination was faked.
Alex and Kira are later kidnapped at gunpoint and separated. Alex regains consciousness in an abandoned office; it looks as though they have been set up.
Alex gets on a flight to Bangkok, Thailand, where Kira had said to meet if anything went wrong, but she does not show up. Alex tries to call his parents, but they do not answer; his mother just misses the call. He starts to rebuild his life, but while in an internet café he uses a "pick up card" and the clerk calls the police. In the café he learns of Z's apparent arrest and Kira's death. The police come in and arrest Alex, and he is sent to jail.
After two years in jail, and after attempting suicide, he is finally freed by royal pardon without any explanation.
At the prison gates on his release he is met by Kira, who finally explains the whole story: she had made a deal with the FBI, working as an agent to catch Zed. They drive off together.
Cast
Callan McAuliffe as Alex
Aiden Besu as 6-year-old Alex
Rian Michelsen as 13-year-old Alex
Lorraine Nicholson as Kira
Daniel Eric Gold as Sye
Clifton Collins Jr. as Zed
Zachary Bennett as Curtis, head of bank security
Vlada Verevko as Alex's Mother
Genadijs Dolganovs as Alex's Father
Kristian Truelsen as Federal Reserve Chairman
Greg Hovanessian as Randy Bickle
Allyson Pratt as Robin the Stripper
James Cade as Chris
Darryl Flatman as Tommy
References
External links
2016 films
Works about computer hacking |
10302338 | https://en.wikipedia.org/wiki/Cone%20of%20Uncertainty | Cone of Uncertainty | In project management, the Cone of Uncertainty describes the evolution of the amount of best case uncertainty during a project. At the beginning of a project, comparatively little is known about the product or work results, and so estimates are subject to large uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty then tends to decrease, reaching 0% when all residual risk has been terminated or transferred. This usually happens by the end of the project i.e. by transferring the responsibilities to a separate maintenance group.
The term Cone of Uncertainty is used in software development where the technical and business environments change very rapidly. However, the concept, under different names, is a well-established basic principle of cost engineering. Most environments change so slowly that they can be considered static for the duration of a typical project, and traditional project management methods therefore focus on achieving a full understanding of the environment through careful analysis and planning. Well before any significant investments are made, the uncertainty is reduced to a level where the risk can be carried comfortably. In this kind of environment the uncertainty level decreases rapidly in the beginning and the cone shape is less obvious. The software business however is very volatile and there is an external pressure to decrease the uncertainty level over time. The project must actively and continuously work to reduce the uncertainty level.
The Cone of Uncertainty is narrowed both by research and by decisions that remove the sources of variability from the project. These decisions are about scope, what is included and not included in the project. If these decisions change later in the project then the cone will widen.
Original research for engineering and construction in the chemical industry demonstrated that actual final costs often exceeded the earliest "base" estimate by as much as 100% (or underran by as much as 50%). Research in the software industry on the Cone of Uncertainty stated that in the beginning of the project life cycle (i.e. before gathering of requirements) estimates have in general an uncertainty of factor 4 on both the high side and the low side. This means that the actual effort or scope can be 4 times or 1/4 of the first estimates. This uncertainty tends to decrease over the course of a project, although that decrease is not guaranteed.
Applications
One way to account for the Cone of Uncertainty in the project estimate is to first determine a 'most likely' single-point estimate and then calculate the high-low range using predefined multipliers (dependent on the level of uncertainty at that time). This can be done with formulas applied to spreadsheets, or by using a project management tool that allows the task owner to enter a low/high ranged estimate and will then create a schedule that will include this level of uncertainty.
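As a minimal Python sketch of this approach, the phase multipliers below are illustrative assumptions, except that the 4x/0.25x pair for the earliest phase reflects the factor-of-4 uncertainty mentioned above; a real process would calibrate the multipliers against its own historical data.

PHASE_MULTIPLIERS = {
    "initial concept": (0.25, 4.0),           # factor-of-4 uncertainty early on
    "requirements complete": (0.67, 1.5),     # assumed values
    "detailed design complete": (0.9, 1.1),   # assumed values
}

def ranged_estimate(most_likely, phase):
    low_mult, high_mult = PHASE_MULTIPLIERS[phase]
    return most_likely * low_mult, most_likely * high_mult

print(ranged_estimate(120, "initial concept"))  # (30.0, 480.0), e.g. person-days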
The Cone of Uncertainty is also used extensively as a graphic in hurricane forecasting, where its most iconic usage is more formally known as the NHC Track Forecast Cone, and more colloquially known as the Error Cone, Cone of Probability, or the Cone of Death. (Note that the usage in hurricane forecasting is essentially the opposite of the usage in software development. In software development, the uncertainty surrounds the current state of the project, and in the future the uncertainty decreases, whereas in hurricane forecasting the current location of the storm is certain, and the future path of the storm becomes increasingly uncertain). Over the past decade, storms have traveled within their projected areas two-thirds of the time, and the cones themselves have shrunk due to improvements in methodology. The NHC first began in-house five-day projections in 2001, and began issuing such to the public in 2003. It is currently working in-house on seven-day forecasts, but the resultant Cone of Uncertainty is so large that the possible benefits for disaster management are problematic.
History
The original conceptual basis of the Cone of Uncertainty was developed for engineering and construction in the chemical industry by the founders of the American Association of Cost Engineers (now AACE International). They published a proposed standard estimate type classification system with uncertainty ranges in 1958 and presented "cone" illustrations in the industry literature at that time. In the software field, the concept was picked up by Barry Boehm. Boehm referred to the concept as the "Funnel Curve". Boehm's initial quantification of the effects of the Funnel Curve were subjective. Later work by Boehm and his colleagues at USC applied data from a set of software projects from the U.S. Air Force and other sources to validate the model. The basic model was further validated based on work at NASA's Software Engineering Lab.
The first time the name "Cone of Uncertainty" was used to describe this concept was in Software Project Survival Guide.
Implication
Estimates (e.g. on duration, costs or quality) are inherently very vague at the beginning of a project
Estimates and project plans based on estimations need to be redone on a regular basis
Uncertainties can be built into estimates and should be visible in project plans
Assumptions that later prove to be mistakes are major factors in uncertainty
See also
Planning poker
Software development effort estimation
References
Footnotes
Further reading
Bossavit, Laurent (2013), The Leprechauns of Software Engineering.
External links
The Cocomo 2.0 Software Cost Estimation Model
The NASA Software Engineering Laboratory: Manager's Handbook for Software Development
Explanation of Cone of Uncertainty from Construx - Software Development Best Practices
The Cone of Uncertainty and Hurricane Forecasting
The Cone of Uncertainty
Project management |
233055 | https://en.wikipedia.org/wiki/Rate-monotonic%20scheduling | Rate-monotonic scheduling | In computer science, rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.
These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.
Introduction
A simple version of rate-monotonic analysis assumes that threads have the following properties:
No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or any kind of semaphore blocking or non-blocking (busy-waits))
Deterministic deadlines are exactly equal to periods
Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks)
Static priorities assigned according to the rate monotonic conventions (tasks with shorter periods/deadlines are given higher priorities)
Context switch times and other thread operations are free and have no impact on the model
It is a mathematical model that contains a calculated simulation of periods in a closed system, where round-robin and time-sharing schedulers fail to meet the scheduling needs otherwise. Rate monotonic scheduling looks at a run modeling of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
Optimality
The rate-monotonic priority assignment is optimal under the given assumptions, meaning that if any static-priority scheduling algorithm can meet all the deadlines, then the rate-monotonic algorithm can too. The deadline-monotonic scheduling algorithm is also optimal with equal periods and deadlines; in fact, in this case the algorithms are identical. In addition, deadline-monotonic scheduling is optimal when deadlines are less than periods. For the task model in which deadlines can be greater than periods, Audsley's algorithm endowed with an exact schedulability test for this model finds an optimal priority assignment.
Upper bounds on utilization
Least upper bound
Liu and Layland proved that for a set of periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:
U = \sum_{i=1}^{n} C_i / T_i \leq n(2^{1/n} - 1), where U is the utilization factor, C_i is the computation time for process i, T_i is the release period (with deadline one period later) for process i, and n is the number of processes to be scheduled. For example, U \leq 2(2^{1/2} - 1) \approx 0.8284 for two processes. When the number of processes tends towards infinity, this expression will tend towards:
\lim_{n \to \infty} n(2^{1/n} - 1) = \ln 2 \approx 0.693. Therefore, a rough estimate for large n is that RMS can meet all of the deadlines if total CPU utilization, U, is less than 70%. The other 30% of the CPU can be dedicated to lower-priority, non-real-time tasks. For smaller values of n, or in cases where U is close to this estimate, the calculated utilization bound should be used.
In practice, for the i-th process, C_i should represent the worst-case (i.e. longest) computation time and T_i should represent the worst-case deadline (i.e. shortest period) in which all processing must occur.
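A minimal Python sketch of the test above; the task set is illustrative and is not taken from the worked examples later in this article. The test is sufficient but not necessary, so a task set exceeding the bound may still be schedulable.

def rms_guaranteed(tasks):
    # tasks is a list of (C, T) pairs: worst-case computation time and period
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization, bound, utilization <= bound

u, bound, ok = rms_guaranteed([(1, 8), (2, 5), (2, 10)])
print(u, bound, ok)  # 0.725, about 0.7798, True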
Upper bound for harmonic task sets
Liu and Layland noted that this bound may be relaxed to the maximum possible value of 1.0 if, for every pair of tasks \tau_i and \tau_j with periods T_i < T_j, T_j is an integer multiple of T_i, which is to say that all tasks have a period that is not just a multiple of the shortest period, T_1, but instead that any task's period is a multiple of all shorter periods. This is known as a harmonic task set. An example would be a task set with periods of 1, 2, 4 and 8. It is acknowledged by Liu and Layland that it is not always feasible to have a harmonic task set and that in practice other mitigation measures, such as buffering for tasks with soft-time deadlines or using a dynamic priority assignment approach, may be used instead to allow for a higher bound.
Generalization to harmonic chains
Kuo and Mok showed that for a task set made up of K harmonic task subsets (known as harmonic chains), the least upper bound test becomes:
U \leq K(2^{1/K} - 1). In the instance where no task period is an integer multiple of another, the task set can be thought of as being composed of n harmonic task subsets of size 1 and therefore K = n, which makes this generalization equivalent to Liu and Layland's least upper bound. When K = 1, the upper bound becomes 1.0, representing full utilization.
Stochastic bounds
It has been shown that a randomly generated periodic task system will usually meet all deadlines when the utilization is 88% or less, however this fact depends on knowing the exact task statistics (periods, deadlines) which cannot be guaranteed for all task sets, and in some cases the authors found that the utilization reached the least upper bound presented by Liu and Layland.
Hyperbolic bound
The hyperbolic bound is a tighter sufficient condition for schedulability than the one presented by Liu and Layland:
\prod_{i=1}^{n} (U_i + 1) \leq 2 ,
where U_i = C_i / T_i is the CPU utilization of each task. It is the tightest upper bound that can be found using only the individual task utilization factors.
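A corresponding Python sketch of the hyperbolic test, using illustrative per-task utilizations:

def hyperbolic_guaranteed(utilizations):
    product = 1.0
    for u in utilizations:
        product *= u + 1.0          # multiply (U_i + 1) over all tasks
    return product <= 2.0

print(hyperbolic_guaranteed([0.1875, 0.4, 0.2]))  # True: product is about 1.995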
Resource sharing
In many practical applications, resources are shared and the unmodified RMS will be subject to priority inversion and deadlock hazards. In practice, this is solved by disabling preemption or by priority inheritance. Alternative methods are to use lock-free algorithms or avoid the sharing of a mutex/semaphore across threads with different priorities. This is so that resource conflicts cannot result in the first place.
Disabling of preemption
The OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL() primitives that lock CPU interrupts in a real-time kernel, e.g. MicroC/OS-II
The splx() family of primitives which nest the locking of device interrupts (FreeBSD 5.x/6.x),
Priority inheritance
The basic priority inheritance protocol promotes the priority of the task that holds the resource to the priority of the task that requests that resource at the time the request is made. Upon release of the resource, the original priority level before the promotion is restored. This method does not prevent deadlocks and suffers from chained blocking. That is, if a high priority task accesses multiple shared resources in sequence, it may have to wait (block) on a lower priority task for each of the resources. The real-time patch to the Linux kernel includes an implementation of this protocol.
The priority ceiling protocol enhances the basic priority inheritance protocol by assigning a ceiling priority to each semaphore, which is the priority of the highest job that will ever access that semaphore. A job cannot preempt a lower priority critical section if its priority is lower than the ceiling priority for that section. This method prevents deadlocks and bounds the blocking time to at most the length of one lower priority critical section. This method can be suboptimal, in that it can cause unnecessary blocking. The priority ceiling protocol is available in the VxWorks real-time kernel. It is also known as Highest Locker's Priority Protocol (HLP).
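A minimal Python sketch of how the ceiling priorities could be derived from a table of which tasks use which semaphores; higher numbers mean higher priority, and the task and resource names are hypothetical:

def ceiling_priorities(task_priority, resource_users):
    # ceiling of a resource = highest priority among tasks that ever lock it
    return {resource: max(task_priority[t] for t in users)
            for resource, users in resource_users.items()}

priorities = {"sensor_task": 3, "logger_task": 1, "comms_task": 2}
usage = {"bus_mutex": ["sensor_task", "logger_task"],
         "buffer_mutex": ["logger_task", "comms_task"]}
print(ceiling_priorities(priorities, usage))  # {'bus_mutex': 3, 'buffer_mutex': 2}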
Priority inheritance algorithms can be characterized by two parameters. First, is the inheritance lazy (only when essential) or immediate (boost priority before there is a conflict). Second is the inheritance optimistic (boost a minimum amount) or pessimistic (boost by more than the minimum amount):
In practice there is no mathematical difference (in terms of the Liu-Layland system utilization bound) between the lazy and immediate algorithms, and the immediate algorithms are more efficient to implement, and so they are the ones used by most practical systems.
An example of usage of basic priority inheritance is related to the "Mars Pathfinder reset bug" which was fixed on Mars by changing the creation flags for the semaphore so as to enable the priority inheritance.
Interrupt Service Routines
All interrupt service routines (ISRs), whether they have a hard real-time deadline or not, should be included in RMS analysis to determine schedulability in cases where ISRs have priorities above all scheduler-controlled tasks. An ISR may already be appropriately prioritized under RMS rules if its processing period is shorter than that of the shortest non-ISR process. However, an ISR with a period/deadline longer than any non-ISR process period with a critical deadline results in a violation of RMS and prevents the use of the calculated bounds for determining schedulability of a task set.
Mitigating mis-prioritized ISRs
One method for mitigating a mis-prioritized ISR is to adjust the analysis by reducing the ISR's period to be equal to that of the shortest period, if possible. Imposing this shorter period results in prioritization that conforms to RMS, but also results in a higher utilization factor for the ISR and therefore for the total utilization factor, which may still be below the allowable bound and therefore schedulability can be proven. As an example, consider a hardware ISR that has a computation time, C, of 500 microseconds and a period, T, of 4 milliseconds. If the shortest scheduler-controlled task has a period of 1 millisecond, then the ISR would have a higher priority, but a lower rate, which violates RMS. For the purposes of proving schedulability, set T = 1 millisecond and recalculate the utilization factor for the ISR (which also raises the total utilization factor). In this case, the ISR's utilization will change from U = 0.5/4 = 0.125 to U = 0.5/1 = 0.5. This utilization factor would be used when adding up the total utilization factor for the task set and comparing to the upper bound to prove schedulability. It should be emphasized that adjusting the period of the ISR is for analysis only and that the true period of the ISR remains unchanged.
Another method for mitigating a mis-prioritized ISR is to use the ISR to only set a new semaphore/mutex while moving the time-intensive processing to a new process that has been appropriately prioritized using RMS and will block on the new semaphore/mutex. When determining schedulability, a margin of CPU utilization due to ISR activity should be subtracted from the least upper bound. ISRs with negligible utilization may be ignored.
Examples
Example 1
Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P1 and finally P3.
Least Upper Bound
The utilization will be: .
The sufficient condition for the n = 3 processes, under which we can conclude that the system is schedulable, is: U \leq 3(2^{1/3} - 1) \approx 0.7797.
Because the total utilization does not exceed this least upper bound, and because being below the least upper bound is a sufficient condition, the system is guaranteed to be schedulable.
Example 2
Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P3 and finally P1.
Least Upper Bound
Using the Liu and Layland bound, as in Example 1, the sufficient condition for the n = 3 processes, under which we can conclude that the task set is schedulable, remains: U \leq 3(2^{1/3} - 1) \approx 0.7797.
The total utilization will be: .
Since the total utilization exceeds this bound, the system is not guaranteed to be schedulable by the Liu and Layland bound.
Hyperbolic Bound
Using the tighter hyperbolic bound, the product of (U_i + 1) over the three tasks does not exceed 2, so the task set is found to be schedulable.
Example 3
Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P3 and finally P1.
Least Upper Bound
Using the Liu and Layland bound, as in Example 1, the sufficient condition for the n = 3 processes, under which we can conclude that the task set is schedulable, remains: U \leq 3(2^{1/3} - 1) \approx 0.7797.
The total utilization will be 0.81875.
Since 0.81875 > 0.7797, the system is not guaranteed to be schedulable by the Liu and Layland bound.
Hyperbolic Bound
Using the tighter hyperbolic bound, the product of (U_i + 1) over the three tasks exceeds 2, so the system is not guaranteed to be schedulable by the hyperbolic bound either.
Harmonic Task Set Analysis
Because the period of P3 is an integer multiple of the period of P2, tasks 2 and 3 can be considered a harmonic task subset. Task 1 forms its own harmonic task subset. Therefore, the number of harmonic task subsets, K, is 2.
Using the total utilization factor calculated above (0.81875), since 0.81875 \leq 2(2^{1/2} - 1) \approx 0.828, the system is determined to be schedulable.
See also
Deadline-monotonic scheduling
Deos, a time and space partitioned real-time operating system containing a working Rate Monotonic Scheduler.
Dynamic priority scheduling
Earliest deadline first scheduling
RTEMS, an open source real-time operating system containing a working Rate Monotonic Scheduler.
Scheduling (computing)
References
Further reading
.
, Chapter 6.
.
External links
Mars Pathfinder Bug from Research @ Microsoft
What really happened on Mars Rover Pathfinder by Mike Jones from The Risks Digest, Vol. 19, Issue 49
The actual reason for the Mars Pathfinder Bug, as described by those who actually dealt with it, rather than by someone whose company (and therefore stock value) depended upon the description of the problem, or by someone who heard someone talking about the problem.
Processor scheduling algorithms
Real-time computing |
1443566 | https://en.wikipedia.org/wiki/FreeBASIC | FreeBASIC | FreeBASIC is a free and open source multiplatform compiler and programming language based on BASIC licensed under the GNU GPL for Microsoft Windows, protected-mode MS-DOS (DOS extender), Linux, FreeBSD and Xbox. The Xbox version is no longer maintained.
According to its official website, FreeBASIC provides syntax compatibility with programs originally written in Microsoft QuickBASIC (QB). Unlike QuickBASIC, however, FreeBASIC is a command line only compiler, unless users manually install an external integrated development environment (IDE) of their choice. IDEs specifically made for FreeBASIC include FBide and FbEdit, while more graphical options include WinFBE Suite and VisualFBEditor.
Compiler features
On its backend, FreeBASIC makes use of GNU Binutils in order to produce console and graphical user interface applications. FreeBASIC supports the linking and creation of C static and dynamic libraries and has limited support for C++ libraries. As a result, code compiled in FreeBASIC can be reused in most native development environments.
C style preprocessing, including multiline macros, conditional compiling and file inclusion, is supported. The preprocessor also has access to symbol information and compiler settings, such as the language dialect.
Syntax
Initially, FreeBASIC emulated Microsoft QuickBASIC syntax as closely as possible. Beyond that, the language has continued its evolution. As a result, FreeBASIC combines several language dialects for maximum level of compatibility with QuickBASIC and full access to modern features. New features include support for concepts such as objects, operator overloading, function overloading, namespaces and others.
Newline characters indicate the termination of programming statements. A programming statement can be distributed on multiple consecutive lines by using the underscore line continuation char (_), whereas multiple statements may be written on a single line by separating each statement with a colon (:).
Block comments, as well as end-of-line remarks are supported. Full line comments are made with an apostrophe ', while blocks of commented code begin with /' and end with '/.
FreeBASIC is not case-sensitive.
Graphics library
FreeBASIC provides built-in, QuickBASIC compatible graphics support through FBgfx, which is automatically included into programs that make a call to the SCREEN command. Its backend defaults to OpenGL on Linux and DirectX on Microsoft Windows. This abstraction makes FBgfx graphics code cross-platform compatible. However, FBgfx is not hardware accelerated.
Users familiar with external graphics utilities such as OpenGL or the Windows API can use them without interfering with the built-in graphics library.
Language dialects
As FreeBASIC has evolved, changes have been made that required breaking older-styled syntax. In order to continue supporting programs written using the older syntax, FreeBASIC now supports the following dialects:
The default dialect (-lang fb as a command-line argument) supports all new compiler features and disallows archaic syntax.
The FB-lite dialect (-lang fblite) permits use of most new, non-object-oriented features in addition to older-style programming. Implicit variables, suffixes, GOSUB / RETURN, numeric labels and other features are allowed in this dialect.
The QB dialect (-lang qb) attempts to replicate QuickBASIC behavior and is able to compile many QuickBASIC programs without modification.
Example code
Standard programs, such as the "Hello, World!" program are done just as they were in QuickBASIC.
Print "Hello, World!"
sleep:end 'Comment, prevents the program window from closing instantly
FreeBASIC adds to this with support for object-oriented features such as methods, constructors, dynamic memory allocation, properties and temporary allocation.
Type Vector
Private:
x As Integer
y As Integer
Public:
Declare Constructor (nX As Integer = 0, nY As Integer = 0)
Declare Property getX As Integer
Declare Property getY As Integer
End Type
Constructor Vector (nX As Integer, nY As Integer)
x = nX
y = nY
End Constructor
Property Vector.getX As Integer
Return x
End Property
Property Vector.getY As Integer
Return y
End Property
Dim As Vector Ptr player = New Vector()
*player = Type<Vector>(100, 100)
Print player->getX
Print player->getY
Delete player
Sleep 'Prevents the program window from closing instantly
In both cases, the language is well suited for learning purposes.
References
External links
IDEs
WinFBE - Modern FreeBASIC Editor for Windows
VisualFBEditor - Cross-platform graphical IDE
fbide.freebasic.net — FBIDE Integrated Development Environment for freeBASIC
FBEdit (current) — FBEdit source code editor for FreeBASIC, version 1.0.7.6c
BASIC compilers
Free compilers and interpreters
Object-oriented programming languages
Free computer libraries
Self-hosting software
Articles with example BASIC code
Free software programmed in BASIC
DOS software
Programming tools for Windows
Linux programming tools
Programming languages created in 2004
Software using the GPL license
Programming languages
High-level programming languages
2004 software
BASIC programming language family |
28980403 | https://en.wikipedia.org/wiki/XOS | XOS | XOS may refer to:
XOS (operating system), an Android-based operating system
XOS 3.0, 4.0, and 5.0, Red Hat Enterprise Linux derivatives
XOS valuation, a kind of utility function used mainly in mechanism design
Xbox One S, a video game console by Microsoft
Xerox Operating System, an operating system for the XDS Sigma line of computers
Xylooligosaccharide, a polymer of the sugar xylose
See also
XDOS (disambiguation) |
13265248 | https://en.wikipedia.org/wiki/Apollo%20Abort%20Guidance%20System | Apollo Abort Guidance System | The Apollo Abort Guidance System (AGS, also known as Abort Guidance Section) was a backup computer system providing an abort capability in the event of failure of the Lunar Module's primary guidance system (Apollo PGNCS) during descent, ascent or rendezvous. As an abort system, it did not support guidance for a lunar landing.
The AGS was designed by TRW independently of the development of the Apollo Guidance Computer and PGNCS.
It was the first navigation system to use a strapdown Inertial Measurement Unit rather than a gimbaled gyrostabilized IMU (as used by PGNCS). Although not as accurate as the gimbaled IMU, it provided satisfactory accuracy with the help of the optical telescope and rendezvous radar. It was also lighter and smaller in size.
Description
The Abort Guidance System included the following components:
Abort Electronic Assembly (AEA): the AGS computer
Abort Sensor Assembly (ASA): a simple strapdown IMU
Data Entry and Display Assembly (DEDA): the astronaut interface, similar to DSKY
The computer used was MARCO 4418 (MARCO stands for Man Rated Computer) whose dimensions were 5 by 8 by 23.75 inches (12.7 by 20.3 by 60.33 centimeters); it weighed 32.7 pounds (14.83 kg) and required 90 watts of power. Because the memory had serial access, it was slower than the AGC, although some operations on the AEA were performed as fast as or faster than on the AGC.
The computer had the following characteristics:
It had 4096 words of memory. Lower 2048 words were erasable memory (RAM), higher 2048 words served as fixed memory (ROM). The fixed and erasable memory were constructed similarly so the ratio between fixed and erasable memory was variable.
It was an 18-bit machine, with 17 magnitude bits and a sign bit. The addresses were 13 bits long; MSB indicated index addressing.
Data words were two's complement and in fixed-point form.
Registers
The AEA has the following registers:
A: Accumulator (18 bit)
M: Memory Register (18 bit), holds data that are being transferred between the central computer and memory
Q: Multiplier-Quotient Register (18 bit), stores the least significant half of result after multiplication and division. It can be also used as extension of Accumulator
Index Register (3 bit): used for index addressing
Other less-important registers are:
Address Register (12 bit): holds the memory address requested by central computer
Operation Code Register (5 bit): holds 5-bit instruction code during its execution
Program Counter (12 bit)
Cycle Counter (5 bit): controls shift instructions
Timers (2 registers): produce the control timing signals
Input Registers: 13 registers
Instruction set
The AEA instruction format consisted of a five-bit instruction code, an index bit and a 12-bit address.
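A small Python sketch of how such a word could be decoded, assuming the opcode occupies the most significant bits; the exact bit ordering used by the AEA hardware is not specified here.

def decode_aea_word(word):
    # 18-bit word: 5-bit opcode, 1 index bit, 12-bit address (assumed ordering)
    assert 0 <= word < (1 << 18)
    opcode = (word >> 13) & 0x1F
    index = (word >> 12) & 0x1
    address = word & 0xFFF
    return opcode, index, address

print(decode_aea_word(0b10101_1_000000001010))  # (21, 1, 10)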
The computer had 27 instructions:
ADD: The contents of memory location are added to Accumulator A. The contents of the memory location remain unchanged.
ADZ (Add and Zero): The contents of memory are added to Accumulator A. The contents of memory are set to zero.
SUB (Subtract): The contents of memory are subtracted from Accumulator A. The contents of memory remain unchanged.
SUZ (Subtract and Zero): The contents of memory are subtracted from Accumulator A. The contents of memory are set to zero.
MPY (Multiply): The contents of Accumulator A are multiplied by the contents of memory. The most significant part of the product is placed in the Accumulator A, the least significant part is placed in Register Q.
MPR (Multiply and Round): Identical to MPY instruction, the most significant part of the product in Accumulator A is rounded by adding one to the contents of Accumulator A if bit 1 of Q Register equals one.
MPZ (Multiply and Zero): Identical to MPR instruction, the contents of memory are set to zero.
DVP (Divide): The contents of Accumulator A and Register Q that form a dividend are divided by the contents of memory. The quotient is placed in Accumulator A and rounded unless the rounding would cause overflow.
COM (Complement Accumulator): The contents of Accumulator A are replaced with their two's complement. If the contents of the Accumulator A are positive, zero or minus one, the contents remain unchanged.
CLA (Clear and Add): The Accumulator A is loaded from memory. The contents of memory remain unchanged.
CLZ (Clear, Add and Zero): Similar to CLA instruction; the contents of memory are set to zero.
LDQ (Load Q Register): The Q Register is loaded with contents of memory. The contents of memory remain unchanged.
STO (Store Accumulator): The contents of Accumulator A are stored in memory. The contents of Accumulator A remain unchanged.
STQ (Store Q Register): The contents of Q Register are stored in memory. The contents of Q Register remain unchanged.
ALS N (Arithmetic Left Shift): The contents of Accumulator A are shifted left N places.
LLS N (Long Left Shift): The contents of Accumulator A and bits 1 - 17 of Q Register are shifted left as one register N places. The sign of Q Register is made to agree with sign of Accumulator A.
LRS N (Long Right Shift): Similar to LLS, but the contents are shifted right N places.
TRA (Transfer): The next instruction is taken from memory.
TSQ (Transfer and Set Q): The contents of the Q Register are replaced with an address field set to one greater than the location of the TSQ instruction. Next instruction is taken from memory.
TMI (Transfer on Minus Accumulator): The next instruction is taken from memory if the contents of the Accumulator A are negative. Otherwise the next instruction is taken in sequence.
TOV (Transfer on Overflow): If the overflow indicator is set, the next instruction is taken from memory.
AXT N (Address to Index): The Index Register is set to N.
TIX (Test Index and Transfer): If the Index Register is positive, it is decremented by one and the next instruction is taken from memory.
DLY (Delay): Execution stops until a timing signal is received. The next instruction is taken from memory.
INP (Input): The contents of input register specified by address are placed in Accumulator A. The input register is either set to zero or remains unchanged (depending upon the selected register).
OUT (Output): The contents of the Accumulator A are placed in output register specified by address.
Software
Early design concepts for the Abort Guidance System did not include a computer, but rather a sequencer without any navigation capability. This would be adequate to put the Lunar Module into lunar orbit where the crew would wait for rescue by the Apollo CSM. Later designs included a digital computer to provide some autonomy.
The AGS software was written in LEMAP assembly language that uses 27 instructions described above and a set of pseudo-operations used by the assembler.
The main computation cycle was 2 seconds long. This 2-second cycle was divided into 100 segments; each of these segments had a duration of 20 ms. These segments were used for computations that needed to be recalculated every 20 ms (like IMU signal processing, update of PGNCS downlink data, direction cosines update, etc.).
There was also a set of computations that had to be performed every 40 ms (engine commands, external signal sampling, attitude control, etc.).
Other computations were performed every 2 seconds and these equations were divided into smaller groups so they could be recalculated during the remaining (i.e. unused) time of 20 ms segments (e.g. radar data processing, calculation of orbital parameters, computation of rendezvous sequence, calibration of IMU sensors, etc.)
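A minimal Python sketch of such a cyclic schedule: a 2-second major cycle of 100 segments of 20 ms each, with some work every segment, some every other segment (40 ms), and the 2-second work split into groups spread across spare segment time. The task names are illustrative, not taken from the flight software.

def build_major_cycle(segment_ms=20, segments=100, slow_groups=10):
    schedule = []
    for seg in range(segments):
        work = ["imu_signal_processing"]              # recomputed every 20 ms
        if seg % 2 == 0:
            work.append("attitude_control")           # recomputed every 40 ms
        if seg < slow_groups:
            work.append("two_second_group_%d" % seg)  # each group once per 2 s cycle
        schedule.append((seg * segment_ms, work))
    return schedule

print(build_major_cycle()[:3])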
The software for AGS was reviewed many times to find program errors and to reduce the size of the software. There are some known versions of the software that were used for uncrewed and crewed tests.
User interface
The AGS User interface unit was named DEDA (Data Entry and Display Assembly). Its function was entry and readout of data from the AGS. Some of the system's functionality was built into DEDA unlike the DSKY used by AGC.
DEDA had the following elements:
Numeral keys 0 - 9
+ and - sign key
CLR key: clears the entry display and clears the OPR ERR light
ENTER key: for data/address entry
READOUT key: reads the data from the specified address and displays the refreshed data every half second
HOLD key: stops the continuous outputting of data
OPR ERR light: indicates Operator's error
displays are used to enter and read the data
Use of AGS
There are few actual descriptions of the use of the AGS, as a landing abort was never needed during the Apollo missions. There were, however, four cases in which the AGS was used.
Its first use was for testing of the Lunar Module descent stage in Earth orbital flight during the Apollo 9 mission. It was used again in the Apollo 10 mission, following separation of the Lunar Module descent stage prior to the APS burn. An incorrect switch setting leaving AGS in Auto rather than Attitude Hold mode led to a prompt and pronounced deviation in attitude moments before staging. The situation was quickly brought under control.
The next use of the AGS was during the lunar ascent phase of the Apollo 11 mission, when the LM crew performed a sequence of rendezvous maneuvers that resulted in gimbal lock; the AGS was subsequently used to acquire attitude control.
The AGS played an important role in the safe return of Apollo 13 after an oxygen tank explosion left the Service Module crippled and forced the astronauts to use the Lunar Module as a "lifeboat." Supplies of electrical power and water on the LM were limited and the Primary Guidance and Navigation System used too much water for cooling. As a result, after a major LM descent engine burn 2 hours past its closest approach to the Moon to shorten the trip home, the AGS was used for most of the return, including two mid-course corrections (pp. III-17, 32, 35, 40).
References
Apollo program hardware
Guidance computers |
39274873 | https://en.wikipedia.org/wiki/Grain%20128a | Grain 128a | The Grain 128a stream cipher was first proposed at the Symmetric Key Encryption Workshop (SKEW) in 2011 as an improvement of its predecessor Grain 128, adding security enhancements and optional message authentication using the Encrypt & MAC approach. One of the important features of the Grain family is that the throughput can be increased at the expense of additional hardware. Grain 128a is designed by Martin Ågren, Martin Hell, Thomas Johansson and Willi Meier.
Description of the cipher
Grain 128a consists of two large parts: the pre-output function and the MAC. The pre-output function has an internal state size of 256 bits, consisting of two registers of size 128 bit: an NLFSR and an LFSR. The MAC supports variable tag lengths w such that 0 < w ≤ 32. The cipher uses a 128 bit key.
The cipher supports two modes of operation: with or without authentication. The mode is configured via a designated bit of the supplied IV, such that if the bit is set then authentication of the message is enabled, and if it is cleared authentication of the message is disabled.
Pre-output function
The pre-output function consists of two registers of size 128 bit, the NLFSR (b) and the LFSR (s), along with two feedback polynomials, g and f respectively, and a Boolean function h.
In addition to the feedback polynomials, the update functions for the NLFSR and the LFSR are:
The pre-output stream (y) is defined as:
Initialisation
Upon initialisation we define an IV of 96 bits, where one designated IV bit dictates the mode of operation.
The LFSR is initialised as:
s_i = IV_i, for 0 ≤ i ≤ 95
s_i = 1, for 96 ≤ i ≤ 126, and s_127 = 0
The last 0 bit ensures that similar key-IV pairs do not produce shifted versions of each other.
The NLFSR is initialised by copying the entire 128 bit key (k) into the NLFSR:
b_i = k_i, for 0 ≤ i ≤ 127
Start up clocking
Before the pre-output function can begin to output its pre-output stream, it has to be clocked 256 times to warm up; during this stage the pre-output stream is fed back into both feedback polynomials.
Key stream
The key stream (z) and the MAC functionality in Grain 128a both share the same pre-output stream (y). As authentication is optional, the key stream definition depends upon the mode of operation.
When authentication is enabled, the MAC functionality uses the first 2w bits of the pre-output stream (where w is the tag size) after the start up clocking to initialise. The key stream is then assigned every other bit due to the shared pre-output stream.
If authentication is enabled: z_i = y_{2w + 2i}
If authentication is disabled: z_i = y_i
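The following Python sketch shows only the overall structure described above: a key-loaded NLFSR, an IV-loaded LFSR, 256 warm-up clocks with the pre-output fed back, and a pre-output stream that is split between MAC initialisation and key stream when authentication is enabled. The feedback and output taps are simplified placeholders, not the actual Grain 128a polynomials.

def toy_grain_structure(key_bits, iv_bits, auth=True, w=32):
    nlfsr = list(key_bits)                    # 128-bit key fills the NLFSR
    lfsr = list(iv_bits) + [1] * 31 + [0]     # 96-bit IV, 31 ones, final zero

    def clock(feed_output_back):
        # placeholder taps only (not the real f, g and h of Grain 128a)
        y = lfsr[93] ^ nlfsr[2] ^ (nlfsr[12] & lfsr[8])
        f = lfsr[0] ^ lfsr[7] ^ lfsr[38] ^ lfsr[96]
        g = nlfsr[0] ^ lfsr[0] ^ (nlfsr[11] & nlfsr[13])
        if feed_output_back:                  # start-up clocking feeds y back
            f ^= y
            g ^= y
        lfsr.pop(0); lfsr.append(f)
        nlfsr.pop(0); nlfsr.append(g)
        return y

    for _ in range(256):                      # warm-up clocking
        clock(True)

    pre_output = [clock(False) for _ in range(4 * w)]
    if auth:
        mac_seed = pre_output[:2 * w]         # first 2w bits initialise the MAC
        keystream = pre_output[2 * w::2]      # then every other bit is key stream
        return keystream, mac_seed
    return pre_output, None

ks, mac_seed = toy_grain_structure([0] * 128, [1] * 96)
print(len(ks), len(mac_seed))                 # 32 64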
MAC
Grain 128a supports tags of size w up to 32 bits. To do this, two registers are used: a shift register (r) and an accumulator (a). To create a tag of a message m, where L is the length of m, the message is padded with a final 1 bit; this ensures that m and m followed by an extra 0 have different tags, and also makes it impossible to generate a tag that completely ignores the input from the shift register after initialisation.
For each bit j of the accumulator at time i, we denote the accumulator bit as a_j^i.
Initialisation
When authentication is enabled, Grain 128a uses the first 2w bits of the pre-output stream (y) to initialise the shift register and the accumulator. This is done by:
Shift register:
for
Accumulator:
for
Tag generation
Shift register:
The shift register is fed all the odd bits of the pre-output stream():
Accumulator:
for
Final tag
When the cipher has completed the L iterations, the final tag (t) is the content of the accumulator:
for
References
External links
A New Version of Grain-128 with Authentication |
337026 | https://en.wikipedia.org/wiki/Jay%20Rockefeller | Jay Rockefeller | John Davison "Jay" Rockefeller IV (born June 18, 1937) is a retired American politician who served as a United States senator from West Virginia (1985–2015). He was first elected to the Senate in 1984, while in office as governor of West Virginia (1977–85). Rockefeller moved to Emmons, West Virginia, to serve as a VISTA worker in 1964 and was first elected to public office as a member of the West Virginia House of Delegates (1966-1968). Rockefeller was later elected secretary of state of West Virginia (1968–1973) and was president of West Virginia Wesleyan College (1973–1975). He became the state's senior U.S. senator when the long-serving Senator Robert Byrd died in June 2010.
Rockefeller is the great-grandson of oil tycoon John D. Rockefeller, who died less than a month before Jay's birth. He was the only serving politician of the Rockefeller family during his tenure in the United States Senate, and the only one to have held office as a Democrat, in what has been a traditionally Republican family (though he too was originally a Republican until he decided to run for office in the then-heavily Democratic state). Rockefeller did not seek reelection in 2014 and was succeeded by Republican U.S. Representative Shelley Moore Capito.
Early life and education
John Davison Rockefeller IV was born at New York Hospital in Manhattan to John D. Rockefeller III (1906–1978) and Blanchette Ferry Hooker (1909–1992), 26 days after the death of his patrilineal great-grandfather, John D. Rockefeller (1839–1937). He is a grandson of John D. Rockefeller Jr. Jay graduated from Phillips Exeter Academy in 1955. After his junior year at Harvard College, he spent three years studying Japanese at the International Christian University in Tokyo. He graduated from Harvard in 1961 with a Bachelor of Arts degree in Far Eastern languages and history. He attended Yale University and did graduate work in Oriental studies and studied the Chinese language.
After college, Rockefeller worked for the Peace Corps in Washington, D.C., under President John F. Kennedy, where he developed a friendship with Attorney General Robert F. Kennedy and worked as an assistant to Peace Corps Director Sargent Shriver. He served as the operations director for the Corps' largest overseas program, in the Philippines. He worked for a brief time in the Bureau of East Asian and Pacific Affairs. He continued his public service in 1964–1965 in the Volunteers in Service to America (VISTA), under President Lyndon B. Johnson, during which time he moved to Emmons, West Virginia.
Career
State politics
Rockefeller was elected to the West Virginia House of Delegates in 1966, and to the office of West Virginia Secretary of State in 1968. He won the Democratic nomination for governor in 1972, but was defeated in the general election by the Republican incumbent Governor Arch A. Moore Jr. Rockefeller then served as president of West Virginia Wesleyan College from 1973 to 1975.
Rockefeller was elected governor of West Virginia in 1976 and re-elected in 1980. He served as governor when manufacturing plants and coal mines were closing as the national recession of the early 1980s hit West Virginia particularly hard. Between 1982 and 1984, West Virginia's unemployment rate hovered between 15 and 20 percent.
U.S. Senate
Elections
In 1984, he was elected to the United States Senate, narrowly defeating businessman John Raese as Ronald Reagan easily carried the state in the presidential election. As in his 1980 gubernatorial campaign against Arch Moore, Rockefeller spent over $12 million to win a Senate seat. Rockefeller was re-elected in 1990, 1996, 2002 and 2008 by substantial margins. He was chair of the Committee on Veterans' Affairs (1993–1995; January 3 to 20, 2001; and June 6, 2001 – January 3, 2003). Rockefeller was the chair of the Committee on Commerce, Science, and Transportation (2009–2015).
Overview
In April 1992, he was the Democratic Party's finance chairman and considered running for the presidency, but pulled out after consulting with friends and advisers. He went on to strongly endorse Clinton as the Democratic candidate.
He was the Chairman of the prominent Senate Intelligence Committee (retiring in January 2009), from which he commented frequently on the war in Iraq.
In 1993, Rockefeller became the principal Senate supporter, with Ted Kennedy, behind Bill and Hillary Clinton's sweeping health care reform package, liaising closely with the First Lady, opening up his mansion next to Rock Creek Park for its first strategy meeting. The reform was subsequently defeated by an alliance between the Business Roundtable and a small-business coalition.
In 2002, Rockefeller made an official visit to several Middle Eastern countries, during which he discussed his personal views regarding United States military intentions with the leaders of those countries. In October of that year, Rockefeller strongly expressed his concern for Saddam Hussein's alleged weapons of mass destruction program while addressing the U.S. Senate:
In November 2005 during a TV interview, Rockefeller stated,
I took a trip ... in January 2002 to Saudi Arabia, Jordan and Syria, and I told each of the heads of state that it was my view that George Bush had already made up his mind to go to war against Iraq, that that was a predetermined set course that had taken shape shortly after 9/11.
Rockefeller noted that the comment expressed his personal opinion, and that he was not privy to any confidential information that such action was planned. On October 11, 2002, he was one of 77 Senators who voted for the Iraq Resolution authorizing the Iraq invasion.
In February 2010, regarding President Obama, Rockefeller said,
He says 'I'm for clean coal,' and then he says it in his speeches, but he doesn't say it in here ... And he doesn't say it in the minds of my own people. And he's beginning to not be believable to me.
Rockefeller faced criticism from West Virginia coal companies, which claimed that he was out of touch.
Rockefeller became the senior U.S. senator from West Virginia when Robert Byrd, who had served in the Senate alongside Rockefeller for 25 years, died in June 2010.
In July 2011 Rockefeller was prominent in calling for U.S. agencies to investigate whether alleged phone hacking at News Corporation's newspapers in the United Kingdom had targeted American victims of the September 11 attacks. Rockefeller and Barbara Boxer subsequently wrote to the oversight committee of Dow Jones & Company (a subsidiary of News Corporation) to request that it conduct an investigation into the hiring of former CEO Les Hinton, and whether any current or former executives had knowledge of or played a role in phone hacking.
He announced on January 11, 2013, that he would not run for a sixth term. On March 25, 2013, Rockefeller announced his support for gay marriage.
In November 2014, Rockefeller donated his senatorial archives to the West Virginia University Libraries and the West Virginia & Regional History Center. The archival collection documents his 30-year career in the United States Senate.
According to the website GovTrack, Rockefeller missed 541 of 9,992 roll call votes from January 1985 to July 2014. This amounted to 5.4 percent, which was worse than the median of 2.0 percent among senators serving as of July 2014.
Rockefeller, along with his son Charles, is a trustee of New York's Asia Society, which was established by his father in 1956. He is also a member of the Council on Foreign Relations, a nonprofit think tank previously chaired by his uncle, David Rockefeller. As a senator, he voted against the 1993 North American Free Trade Agreement, which was heavily backed by David Rockefeller.
Committees
Rockefeller served on the following committees in the 112th Congress:
Committee on Commerce, Science, and Transportation (Chairman)
As chair of the full committee, Sen. Rockefeller may serve as an ex officio member of all subcommittees
Committee on Finance
Subcommittee on Health Care (Chairman)
Subcommittee on International Trade, Customs, and Global Competitiveness
Subcommittee on Social Security, Pensions, and Family Policy
Select Committee on Intelligence
Committee on Veterans' Affairs
Joint Committee on Taxation
Political positions
Iraq War
Rockefeller initially supported the use of force based upon the evidence presented by the intelligence community that linked Iraq to nuclear ambitions. After the Niger uranium forgeries, in which the Bush administration gave forged documents to U.N. weapons inspectors to support allegations against Iraq, Rockefeller started an investigation into the falsification and exaggeration of evidence for the war. Through the investigations, he became an outspoken critic of Bush and the Iraq war. As chair of the Intelligence committee, he presided over a critical report on the Administration's handling of intelligence and war operations.
Rockefeller and the Senate Select Committee on Intelligence released the final two pieces of the Phase II report on Iraq war intelligence on June 5, 2008. Rockefeller said, "The president and his advisers undertook a relentless public campaign in the aftermath of the attacks to use the war against Al Qaeda as a justification for overthrowing Saddam Hussein."
Television violence
In July 2007, Rockefeller announced that he planned to introduce legislation before the August Congressional recess that would give the FCC the power to regulate TV violence. According to the edition of July 16, 2007, of Broadcasting & Cable, the new law would apply to both broadcast as well as cable and satellite programming. This would mark the first time that the FCC would be given power to regulate such a vast spectrum of content, which would include almost everything except material produced strictly for direct internet use. An aide to the senator said that his staff had also been carefully formulating the bill in such a way that it would be able to pass constitutional scrutiny by the courts.
Telecommunications companies
In 2007, Rockefeller began steering the Senate Intelligence Committee to grant retroactive immunity to telecommunications companies who were accused of unlawfully assisting the National Security Agency (NSA) in monitoring the communications of American citizens.
This was an about-face of sorts for Senator Rockefeller, who had hand-written a letter to Vice President Dick Cheney in 2003 expressing his concerns about the legality of NSA's warrantless wire-tapping program. Some have attributed this change of heart to the spike in contributions from telecommunications companies to the senator just as these companies began lobbying Congress to protect them from lawsuits regarding their cooperation with the National Security Agency.
Between 2001 and the start of this lobbying effort, AT&T employees had contributed only $300 to the senator. After the lobbying effort began, AT&T employees and executives donated $19,350 in three months. The senator has pledged not to rely on his vast fortune to fund his campaigns, and the AT&T contributions represent about 2% of the money he raised during the previous year.
Torture
Although publicly deploring torture, Rockefeller was one of two Congressional Democrats briefed on waterboarding and other secret CIA practices in the early years of the Bush Administration, as well as the existence of taped evidence of such interrogations (later destroyed). In December 2007, Rockefeller opposed a special counsel or commission inquiry into the destruction of the tapes, stating "it is the job of the intelligence committees to do that."
On September 28, 2006, Rockefeller voted with a largely Republican majority to suspend habeas corpus provisions for anyone deemed by the Executive Branch an "unlawful combatant," barring them from challenging their detentions in court. Rockefeller's vote gave a retroactive, nine-year immunity to U.S. officials who authorized, ordered, or committed acts of torture and abuse, and permitted statements obtained through torture to be used in military tribunals so long as the abuse took place by December 30, 2005.
Rockefeller's vote authorized the President to establish permissible interrogation techniques and to "interpret the meaning and application" of international Geneva Convention standards, so long as the coercion fell short of "serious" bodily or psychological injury. The bill became law on October 17, 2006.
2008 presidential election
On February 29, 2008, he endorsed Barack Obama for president of the United States, citing Obama's judgment on the Iraq war and national security issues, and calling him the right candidate to lead America during a time of instability at home and abroad. This endorsement stood in stark contrast to the results of the state primary that was easily won by Hillary Clinton.
On April 7, 2008, in an interview for The Charleston Gazette, Rockefeller criticized John McCain's Vietnam experience:
McCain was a fighter pilot, who dropped laser-guided missiles from 35,000 feet. He was long gone when they hit. What happened when they get to the ground? He doesn't know. You have to care about the lives of people. McCain never gets into those issues. ("Rockefeller Apologizes for McCain Remark", FoxNews.com (AP), April 8, 2008; retrieved November 22, 2010.)
The McCain campaign called for an apology from Senator Rockefeller and for Barack Obama, whom Rockefeller endorsed, to denounce the comment. Rockefeller later apologized for the comment and the Obama campaign issued a statement expressing Obama's disagreement with the comment. Senator Lindsey Graham (R) of South Carolina noted that "John didn't drop bombs from 35,000 feet. ... the bombs were not laser guided (in the 1960s and 1970s)".
Cybersecurity
On April 1, 2009, Rockefeller introduced the Cybersecurity Act of 2009 (S. 773) before Congress. Citing the vulnerability of the Internet to cyber-attacks, the bill makes provisions to turn the Department of Commerce into a public-private clearing house to share potential threat information with the owners of large private networks. It authorizes the Secretary of Commerce to sequester any information deemed necessary, without regard to any law.
It would also authorize the president to declare an undefined "cyber-emergency," allowing the president to shut down any and all traffic to any server considered to be compromised.
On June 1, 2011, Rockefeller sponsored the fourth West Virginia Homeland Security Summit and Expo. The event ran two days and focused on homeland security with Rockefeller emphasizing cybersecurity.
Health care
In 1997, Rockefeller co-authored the Children's Health Insurance Program (CHIP) – a program aimed at giving low-income children health insurance coverage. CHIP has successfully covered about 6 million children annually who would otherwise have been uninsured. On September 30, 2007, the program expired, requiring Congress to reauthorize the legislation. On August 2, 2007, reauthorization legislation passed by a strong, bipartisan vote (68-31).
Recognizing the importance of long-term care for the nation's veterans, Rockefeller authored successful legislation that required the Department of Veterans Affairs, for the first time, to provide a wide range of extended care services—such as home health care, adult day care, respite care, and hospice care—to veterans who use the VA health care system.
Rockefeller is also a strong supporter of the fight against Alzheimer's and other neurological diseases. The Blanchette Rockefeller Neurosciences Institute (BRNI) was founded in Morgantown in 1999 by Rockefeller and his family to help advance medical and scientific understanding of Alzheimer's and other diseases of the brain. BRNI is the world's only non-profit institute dedicated exclusively to the study of both human memory and diseases of memory. Its primary mission is to accelerate neurological discoveries from the lab, including diagnostic tools and treatments, to the clinic to benefit patients who suffer from neurological and psychiatric diseases. A $30 million state-of-the-art BRNI research facility was opened at West Virginia University in fall 2008. The three-level building was planned to house 100 scientists by 2012.
On health care reform, Rockefeller has been a proponent of a public option, clashing with some Democrats on the Finance Committee, in particular Max Baucus, the chairman of the committee, who contended that there was not enough support for a public option to gather the 60 votes needed to prevent a filibuster. Baucus repeatedly asked Rockefeller to stop speaking on the issue.
On September 29, 2009 Rockefeller offered an amendment to the Baucus Health Bill in the Senate Finance Committee to add a public option. The amendment was rejected 15 to 8, with five Democrats (Baucus, Kent Conrad, Blanche Lincoln, Tom Carper, Bill Nelson) and all Republicans voting no.
Rockefeller supported President Barack Obama's health reform legislation; he voted for the Patient Protection and Affordable Care Act in December 2009, and he voted for the Health Care and Education Reconciliation Act of 2010.
Electoral history
Personal life
Since 1967, Rockefeller has been married to the former Sharon Lee Percy, the chief executive officer of WETA-TV, the leading PBS station in the Washington, D.C., area, which broadcasts such programs as PBS NewsHour and Washington Week. She is a twin daughter of Senator Charles Harting Percy (1919–2011) and Jeanne Valerie Dickerson.
Jay and Sharon have four children:
John Davison "Jamie" Rockefeller V (born 1969)
Valerie Rockefeller
Charles Rockefeller
Justin Aldrich Rockefeller (born 1979)
John Davison "Jamie" Rockefeller V is married to Emily Rockefeller. She is the daughter of former National Football League (NFL) Commissioner Paul Tagliabue. They have two daughters, Laura Chandler Rockefeller (born c. 2000) and Sophia Percy Rockefeller (born c. 2002), and one son, John Davison Rockefeller VI (born c. 2007).
The Rockefellers reside in Northwest Washington, D.C., and maintain permanent residence in Charleston, West Virginia. They have a ranch in the Grand Teton National Park in Jackson Hole, Wyoming. President Bill Clinton, a friend of Rockefeller's, and the Clinton family vacationed at the ranch in August 1995.
Rockefeller is related to several Republican Party supporters and former officeholders: his paternal grandmother Abigail Greene "Abby" Aldrich (1874–1948) was a daughter of Rhode Island Senator Nelson Wilmarth Aldrich (1841–1915). The youngest son of John Davison Rockefeller Jr. (1874–1960) and Abby was banker David Rockefeller (1915–2017). David's brother Winthrop Rockefeller (1912–1973) served as Governor of Arkansas (1967–71). Winthrop and David's brother Nelson Aldrich Rockefeller (1908–1979) served as Governor of New York (1959–73) and as Vice President of the United States (1974–77) under President Gerald Ford. Jay is also a first cousin of Arkansas Lt. Governor Winthrop Paul Rockefeller (1948–2006).
Awards and decorations
National Intelligence Distinguished Public Service Medal, 2009
Grand Cordon Order of the Rising Sun (Japan), 2013.
National Consumers League first-ever Consumer and Labor Leadership Award (shared with Sen. Tom Harkin), commemorating their service to America's consumers and workers; Rockefeller also received the NCL Trumpeter award in 1992.
See also
Rockefeller family
David Rockefeller
Kykuit
US Senate Report on chemical weapons (Rockefeller chaired this committee)
2005 CIA interrogation tapes destruction
References
Further reading
Jay Rockefeller: Old Money, New Politics, Richard Grimes, Parsons, West Virginia: McClain Printing Company, 1984.
The System: The American Way of Politics at the Breaking Point, Haynes Johnson and David S. Broder, Boston: Little Brown and Company, 1996. (Significant mention)
Senator
Governor
Biography at West Virginia Archives and History
Inaugural Address of John D. Rockefeller, IV (1977)
Inaugural Address of John D. Rockefeller, IV (1981)
Biography at the Peace Corps
External links
Articles
Senator Outlines Plans For Intelligence Panel – Rockefeller's agenda on becoming chairman in January 2007.
Membership at the Council on Foreign Relations
1937 births
American people of English descent
American people of German descent
American people of Scotch-Irish descent
American Presbyterians
Heads of universities and colleges in the United States
Democratic Party state governors of the United States
Democratic Party United States senators
Governors of West Virginia
Grand Cordons of the Order of the Rising Sun
Harvard College alumni
Living people
Members of the Council on Foreign Relations
Politicians from Charleston, West Virginia
Politicians from Manhattan
Phillips Exeter Academy alumni
Rockefeller family
Secretaries of State of West Virginia
United States senators from West Virginia
West Virginia Democrats
Winthrop family
20th-century American politicians
21st-century American politicians |
13951777 | https://en.wikipedia.org/wiki/Satellite%20navigation%20software | Satellite navigation software | Satellite navigation software or GPS navigation software usually falls into one of the following two categories:
Navigation with route calculation and directions from the software to the user of the route to take, based on a vector-based map, normally for motorised vehicles, with some non-motorised modes of travel added on as an afterthought.
Navigation tracking, often with a map "picture" in the background, but showing where you have been, and allowing "routes" to be preprogrammed, giving a line you can follow on the screen. This type can also be used for geocaching.
Terminology
Track
A track is a trace of somewhere that you have actually been (often called a "breadcrumb trail"). The GPS unit (external or internal) periodically sends details of the location, which are recorded by the software either at a set time interval, after a set distance has been covered, when the direction changes by more than a certain angle, or by a combination of these criteria. Each point is stored together with its date and time. The resulting track can be displayed as a series of the recorded points or a line connecting them.
Retracing your steps is a simple matter of following the track back to the source.
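The recording rules above can be summarised in a short sketch. The following Python fragment is purely illustrative; it is not taken from any particular navigation product, and the threshold values are arbitrary assumptions. It decides whether a new GPS fix should be appended to the track based on elapsed time, distance travelled, or change of heading.

"""Illustrative sketch only: decide whether a new GPS fix should be stored
as a track point, using the time / distance / change-of-heading rules
described above. Threshold values are arbitrary assumptions."""
import math
from dataclasses import dataclass

@dataclass
class TrackPoint:
    lat: float        # degrees
    lon: float        # degrees
    timestamp: float  # seconds since epoch

def should_record(prev: TrackPoint, prev_heading: float,
                  new: TrackPoint, new_heading: float,
                  min_seconds: float = 10.0,
                  min_metres: float = 25.0,
                  min_turn_deg: float = 15.0) -> bool:
    # Equirectangular approximation; adequate for the short hop between fixes.
    dx = math.radians(new.lon - prev.lon) * math.cos(math.radians(prev.lat))
    dy = math.radians(new.lat - prev.lat)
    distance_m = 6371000.0 * math.hypot(dx, dy)
    # Smallest angular difference between the two headings, in degrees.
    turn = abs((new_heading - prev_heading + 180.0) % 360.0 - 180.0)
    return (new.timestamp - prev.timestamp >= min_seconds
            or distance_m >= min_metres
            or turn >= min_turn_deg)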
Route
A route is a preset series of points that make up a path to follow to your destination. Most software allows the route and the track to be displayed at the same time.
Waypoint
Waypoints are used to mark particular locations, typically used as markers along the "way" to somewhere. They are either keyed in by users or downloaded from other sources, depending upon the sophistication of the device. Although not linked to tracks or routes, they can be used to simplify the construction of routes, as they can be re-used. Frequently, waypoints serve a "safety" purpose, enabling a route to be taken around obstacles such as shallow water (marine navigation) or streams/cliffs/other hazards which may prevent a safe passage directly from point "A" to point "B".
Platforms
Software can be used on a laptop computer with an attached GPS receiver. Most commercial software runs on Windows, Mac OS X, and Linux.
Some software like Waze and Google Maps can also be used on mobile phone operating systems.
Software products
There are several navigation software products available. The primary distinction is whether it is designed for use on land or water.
Land-based navigation software
Commercial navigation software with embedded maps
DeLorme Street Atlas USA and Topo USA
HERE
Microsoft Streets and Trips 2009
Rand McNally
Navigon
Navman
Magellan
Mireo
iGO
ROUTE 66
TomTom Navigator
TomTom Mobile
TeleType WorldNavigator
TPL Maps
Waze
Commercial navigation software with scanned or downloaded maps and orthophotos stored in the computer (independent, stand alone system)
OziExplorer
GPSS
Free open source navigation software (independent stand alone system)
OsmAnd (Android) open source, and free
MoNav (Cross-platform) open source and free
Navit (Cross-platform) open source and free
Navigation software with maps downloaded from a remote server
Google Earth (Windows, Mac, Linux)
Google Maps (platform independent)
Navit (Cross-platform) open source and free
Marine navigation software
Navigation software for use on the water has many features in common with land-based GPS navigation software. It can use electronic navigational charts or raster charts, usually provides the ability to plan routes and set waypoints, and may have live GPS tracking capabilities. In addition, marine navigation software often has an option to control an external autopilot for automated boat navigation. It may incorporate GRIB weather overlays on the chart, tide predictions, and other related information services of additional use to mariners.
Free open source marine navigation software
OpenCPN (Cross-platform) open source and free
Aeronautical navigation software
This kind of software usually presents a modern glass cockpit display and uses more than just a single GPS sensor to assist navigation. Such additional sensors include Attitude and Heading Reference Systems (AHRS) and Inertial Measurement Units (IMU).
See also
Comparison of free off-line GPS software
Comparison of commercial GPS software
Comparison of web map services
Geopositioning
GPS software-defined receiver
References
Transport software
Global Positioning System |
65008732 | https://en.wikipedia.org/wiki/Brooklyn%20Bridge%20%28software%29 | Brooklyn Bridge (software) | The Brooklyn Bridge from White Crane Systems was a data transfer enabler. Although it came with some hardware, it was the software which was the basis of the product. It also could transform the data's format.
Overview
The New York Times described its category as being among "communications packages used to transfer files." In an era of 300 baud, Brooklyn Bridge operated at "115,200 baud", so that a transfer which "at 300 baud took 4 minutes and 36 seconds" needed only 5 seconds. Unlike some communications packages, this one retained the original version date of transferred files, so that users were not alarmed by what looked like an update when it was not one.
Description
Once the software is installed, users comfortable with typing the word "COPY" can transfer files as readily as they could by sneakernet. An earlier review described it as "less cumbersome than conventional communications software". Neither specialized hardware nor specialized software is ideal in an era when such transfers can be done using online or other "outside" services.
See also
BLAST (protocol)
Kermit (protocol)
Zamzar
References
Communication software
Computer data
Data management
History of software
Software companies of the United States |
290832 | https://en.wikipedia.org/wiki/The%20Magic%20School%20Bus%20Lost%20in%20the%20Solar%20System | The Magic School Bus Lost in the Solar System | The Magic School Bus Lost in the Solar System is the fourth book in Joanna Cole and Bruce Degen's The Magic School Bus series. The book depicts arguably the most well-known adventure of the series and introduces the character of Janet.
Synopsis
Ms. Frizzle's class is learning about the Solar System and Arnold's unpleasant cousin Janet, who constantly raves about herself, has joined them. The Friz decides to take the kids on a field trip to the planetarium. But once they get there, they find the planetarium is closed for repairs. However, on the way back to school, Ms. Frizzle pushes a button that makes the bus transform into a rocket and blast off into outer space.
Once in outer space, the bus flies to Earth's Moon, where the kids make the most of the lesser gravity. Ms. Frizzle then takes them to the Sun and then Mercury, Venus and Mars before flying into the asteroid belt. However, while in the belt, one of the bus's tail lights is damaged by an asteroid and the Friz flies out to fix the tail light with a tether line connecting her to the bus. However, the bus's autopilot malfunctions, causing the bus to fly off, breaking Ms. Frizzle's tether line and leaving her stranded in the asteroid belt.
Janet looks through the Friz's things and finds Ms. Frizzle's lesson book, which documents the information she is supposed to tell the kids during the field trip (complete with "Arnold, are you listening?" written into it.) Janet reads through the book as they pass the outer planets and until they pass Pluto, leaving the solar system. Janet then flips through the book and finds the instructions for the autopilot, so they can fly back to the asteroid belt and rescue Ms. Frizzle.
After they rescue the Friz, they return to Earth. The children tell various adults about their trip, but unfortunately the adults do not believe them, thinking that instead it was all their imagination, perhaps some kind of game they played with their friends.
Reception
Michele Landsberg of Entertainment Weekly gave a very positive review of the book, saying that, "The fun is irresistible and the information substantial [...]".
Television adaptation
The book was adapted into the first episode of the Magic School Bus television series to be broadcast. It is likely not the first episode produced (i.e. the pilot episode) since Arnold at one point mentions that the class went on a field trip inside a rotten log, probably referring to the events of the episode The Magic School Bus Meets The Rot Squad. (He also mentions that they've gone to the bottom of the ocean, but they never did that until the 4th episode of the first season The Magic School Bus Gets Eaten and the second-season episode The Magic School Bus Blows Its Top.) The episode is infamous for its third act in which it unintentionally teaches that one could survive without a helmet in the vacuum of space if brought back into an oxygen-filled environment quickly.
For the most part, the episode remains faithful to the book. Most notably, in the episode, Janet's bragging about herself does not appear to be empty bragging, and she constantly raves about how "proof" is needed for all extraordinary claims, prompting her to force Arnold to collect "proof" from every planet in the solar system so she can prove to the students in her class that she actually traveled to all the planets. Also, in this version, Arnold surprisingly suggests going to outer space himself to prove the truth to Janet.
Also, instead of remaining in the asteroid belt, Ms. Frizzle uses her jet pack to fly off to another planet and provides the kids with clues as to her location via the radio on the bus. She, of course, turns out to be on Pluto. The ending is changed too, with all of Janet's possessions ("proof") falling out of the bus on Pluto and her refusing to leave without them. Janet refuses to go home without her possessions, while Arnold refuses to go home without her. Arnold, in protest against her desire for proof, demonstrates to Janet what would happen to her if she remained on Pluto by removing his space helmet. Removing the helmet freezes him, despite Janet's pleas for him not to do it, and forces Janet to leave Pluto immediately. As the kids get back on the bus, Janet (as she and the class are holding Arnold) says, "BACK TO EARTH!", and to Ms. Frizzle, she says, "AND STEP ON IT!". This ending, of course, makes Arnold the hero instead of Janet. Upon arriving back on Earth, all Arnold got from the freeze-over was a cold, and then the alien-obsessed Ralphie fakes a news announcement about aliens discovered on Pluto, complaining about the junk Janet left.
Despite the producer segment indicating otherwise, it is not implausible that Arnold could survive exposure to the thin atmosphere of Pluto after removing his helmet for long enough to be rescued. The show does, however, repeat the urban legend that one can catch a cold from the cold. The common cold is a viral infectious disease, and in particular it could not be caught in the virtually airless atmosphere of Pluto, as Arnold does in the episode.
This episode also marks the introduction of Janet in the TV series, and it is also the first time the bus is driven by the kids or someone other than Ms. Frizzle (the bus does not appear to have autopilot in the episode).
Software adaptation
The Magic School Bus Explores the Solar System is the first software game developed based on the Magic School Bus series (The Magic School Bus Explores the Human Body was released the same year). In the game, Ms. Frizzle is lost as soon as the bus flies off into space and is hit by a meteor in a meteor shower, and the goal of the game is to locate her using the clues she provides. The Friz's hiding place varies throughout the game.
As would become the standard for the remainder of the games in the original software series, the main screen consists of the bus's dashboard, where the user can "drive" the bus to any of the nine planets or Earth's moon. Since Jupiter, Saturn, Uranus and Neptune lack solid surfaces, the Bus (unlike in the book and TV adaptations) in this version instead lands on one of their moons (Io for Jupiter, Mimas for Saturn, Miranda for Uranus and Triton for Neptune).
Once on a planet, the user can exit the bus, where a strange satellite called the "whatsit" must be clicked on to bring up an arcade-styled game in which the user, controlling one of the students, must collect one of Ms. Frizzle's tokens (giant coins with Ms. Frizzle's face on them) which provide the user with a clue as to the Friz's whereabouts and activate the "Friz-finder." There are only three clues available, but tokens will still need to be collected to activate the "Friz-finder" after all the clues have been exhausted. Clicking on the "Friz-finder" will determine whether Ms. Frizzle is anywhere on the current planet. If she is, then the game is completed, but if she is not then the user will have to collect another token to activate the "Friz-finder" to try again.
Notes
In three of the Magic School Bus games, Liz has the ability to speak (only the player can hear her).
Most of the voices in the software sound similar to those in the TV series, except for Ms. Frizzle. Lily Tomlin did not reprise the role in the game, being replaced by Tina Marie Goff.
When Keesha says "It's really cloudy down here", her voice is that of Phoebe's. Similar goofs happened in other software; Phoebe speaks in Keesha's voice showing her report about the Small Intestines, Dorothy Ann has Keesha and Phoebe's voices and not her own while exploring the Large Intestines, and Ralphie speaks in Tim's voice saying how ugly garbage is on the beach.
Notes
References
Lost In The Solar System
1990 children's books
1994 video games
Microsoft games
Windows games
Classic Mac OS games
Software for children
Children's television series episodes |
21362023 | https://en.wikipedia.org/wiki/List%20of%20numerical%20libraries | List of numerical libraries | This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions.
The choice of a typical library depends on a diverse range of requirements such as: desired features (e.g.: large dimensional linear algebra, parallel computation, partial differential equations), commercial/opensource nature, readability of API, portability or platform/compiler dependence (for e.g.: Linux, Windows, Visual C++, GCC), performance in speed, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed.
Since comprehensive surveys are rarely available, there is almost always (at least initially) a difficult choice among a number of possible libraries. The choice often comes down to the user's own taste and comfort, largely due to the lack of proper comparative information.
Multi-language
C
C++
Delphi
ALGLIB - an open source numerical analysis library.
.NET Framework languages C#, F#, VB.NET and PowerShell
Fortran
Java
Perl
Perl Data Language gives standard Perl the ability to compactly store and speedily manipulate large N-dimensional data arrays. It can perform complex and matrix maths, and has interfaces for the GNU Scientific Library, LINPACK, PROJ (as of July 2021, only version 4), and plotting with PGPLOT. There are libraries on CPAN adding support for the linear algebra library LAPACK, the Fourier transform library FFTW, and plotting with gnuplot and PLplot.
Python
Others
XNUMBERS — Multi Precision Floating Point Computing and Numerical Methods for Microsoft Excel.
INTLAB — Interval arithmetic library for MATLAB.
See also
Comparison of computer algebra systems
Comparison of numerical-analysis software
List of graphing software
List of numerical-analysis software
List of optimization software
List of statistical packages
References
External links
The Math Forum - Math Libraries, an extensive list of mathematical libraries with short descriptions
Numerical analysis software
Numerical analysis software
Software |
10319588 | https://en.wikipedia.org/wiki/Agile%20testing | Agile testing | Agile testing is a software testing practice that follows the principles of agile software development. Agile testing involves all members of a cross-functional agile team, with special expertise contributed by testers, to ensure delivering the business value desired by the customer at frequent intervals, working at a sustainable pace. Specification by example is used to capture examples of desired and undesired behavior and guide coding.
Overview
Agile development recognizes that testing is not a separate phase, but an integral part of software development, along with coding. Agile teams use a "whole-team" approach to "baking quality in" to the software product. Testers on agile teams lend their expertise in eliciting examples of desired behavior from customers, collaborating with the development team to turn those into executable specifications that guide coding. Testing and coding are done incrementally and interactively, building up each feature until it provides enough value to release to production. Agile testing covers all types of testing. The Agile Testing Quadrants provide a helpful taxonomy to help teams identify and plan the testing needed.
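As a concrete, purely hypothetical illustration of turning a customer-supplied example into an executable specification, a team working in Python might capture a pricing rule as a pytest test. The module and function names below are invented for the example and are not part of any real product.

import pytest
from shopping_cart import apply_discount  # hypothetical module under test

# Customer example: "orders of 100.00 or more get 10% off; smaller orders do not."
@pytest.mark.parametrize("order_total, expected", [
    (100.00, 90.00),   # boundary case taken from the customer's example
    (150.00, 135.00),  # desired behaviour
    (99.99, 99.99),    # undesired behaviour: no discount below the threshold
])
def test_discount_follows_customer_example(order_total, expected):
    assert apply_discount(order_total) == pytest.approx(expected)

Such tests are written before or alongside the production code and run continuously, so the customer's examples double as regression checks.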
The model of the Agile Testing Quadrants was originally described by Brian Marick, and was popularized by Lisa Crispin and Janet Gregory in their book Agile Testing: A Practical Guide for Testers and Agile Teams. It places different test types on two axes: Technology Facing vs Business Facing, and Support Programming vs Critique Product.
Traditional testing methodologies (often employed in the Waterfall model of software development) usually involve a two-team, two-phase process in which the development team builds the product to as near perfection as possible. The software product is delivered late in the software development life cycle, at which point the test team strives to find as many bugs/errors as possible. In contrast with these traditional methodologies, Agile testing focuses on repairing faults immediately, rather than waiting for the end of the project. When testing occurs at the tail end of a project, it can sometimes be sacrificed in terms of duration and quality to meet critical schedules and budget restrictions. Costs are expected to go down as the time between development and testing feedback decreases. With shorter feedback loops, bug fixes and rework require less time because developers spend much less time re-engaging with the code's context as they move on to new problems and projects.
In the "Worldwide Software Testing Practices Report 2015 - 2016", ISTQB found that the popularity of Agile methodologies are significantly increasing, which shows the need for Agile testing processes and techniques. They are providing an Agile Tester extension to their certification.
Tools
As companies grow, agile testing teams often rely on software testing tools to solve challenges and ultimately speed up the delivery of feedback. Most teams look for collaboration features, automated or customized reporting, and ways to avoid repeated effort. Choosing the right tool will depend on the requirements of each team. Paired with other Agile lifecycle development tools, Agile testing tools can deliver effective results by coexisting in integrated environments. Such is the case for Atlassian Marketplace and Microsoft Visual Studio.
Some test management tools support Agile testing by getting teams involved earlier in the SDLC to continuously build test scenarios as stories evolve. Teams often look for a solution that can deliver a combination of automated and manual testing.
Further reading
References
Software testing
Agile software development |
15209792 | https://en.wikipedia.org/wiki/Collusion%20Syndicate | Collusion Syndicate | The Collusion Syndicate, formerly the Collusion Group and sometimes spelled Collu5ion, C0llu5i0n or C011u5i0n, was a Computer Security and Internet Politics Special Interest Group (SIG) founded in 1995 and effectively disbanded around 2002.
Collusion Group
The Collusion Group was founded in 1995 by technologist Tex Mignog, aka the TexorcisT (sic) in Dallas, Texas before moving the headquarters to Austin, Texas in 1997. Founding members included individuals that all operated anonymously using hacker pseudonyms (called "handles" or "nyms") including the TexorcisT, Progress, Sfear (sic), Anormal, StripE (sic) and Elvirus. The membership of this organization grew to an estimated 30+ by 1999 and was not localized to its headquarters in Austin, Texas, with members in other states, countries and continents. The group made numerous open appearances at computer security events such as H.O.P.E. and DefCon and was often quoted by the media on computer related security, political and cultural issues.
The group was well known for its online publication, www.Collusion.org and also founded and financed other events such as the "irQconflict", the largest seasonal computer gaming tournament in the South-Central US.
The group was often interviewed with regard to Internet security issues by reporters for a variety of media outlets, some examples being KVUE News, the Austin American Statesman and Washington Post, and The New York Times.
www.Collusion.org
The Collusion Syndicate began publishing articles written by its members and others that submitted articles in 1997 on their site, www.Collusion.org, their stated mission being to "Learn all that is Learnable".
This site won awards including a Best of Austin in 2000 by the Austin Chronicle where the site was described as "an edgy cabal of net-savvy punks and vinyl-scratching, video-gaming malcontents, laying it down in no uncertain terms with a lot of dark backgrounds and urban-toothed graphics and in-your-face-yo rants."
Collusion Syndicate research on SIPRNet has been referenced by the SANS Institute.
Xchicago has published tips by Collusion's founder, TexorcisT, on Google Hacking.
The group's work and research are referenced in many books, including Steal This Computer Book 4.0: What They Won't Tell You about the Internet, Mac OS X Maximum Security, and Anarchitexts: Voices from the Global Digital Resistance.
The group may have been tied to Assassination Politics as evidenced by declassified documents.
Notable Inventions and Actions
AnonyMailer
1995 - An application developed to point out security issues with the Simple Mail Transfer Protocol.
Port-A-LAN
1998 - The Port-A-LAN is described as a "LAN-in-a-Box" designed to facilitate quick network deployments, using Cat 3 50-pin telco cable and break-out "harmonicas" to deploy a 160-node network at a previously unwired location in less than one hour. (Developed prior to the advent of WiFi popularity.)
irQconflict
1998-2001 - The Collusion Syndicate hosted the irQconflict, the largest seasonal computer gaming tournament in the South-Central US. These events were different in that they were very large for LAN party standards (100-200 gamers) and included a rave-like atmosphere with DJs, club lighting and projectors showing computer animation and machinima. They took place in various venues in Austin, Texas, utilized Port-A-LAN technology and, due to their size, required the re-engineering of the venues' electrical wiring. These events drew attendance from all over Texas and surrounding states.
The Collusion Group took the show on the road in 1999, taking the irQconflict to DefCon 7, and in 2000 was invited to do their thing in conjunction with SXSW Interactive and the COnduit 2K electronic film festival, where some machinima films chose to debut during the gaming.
Virtual Sit-ins
1999 - The Collusion Syndicate promoted Virtual Sit-ins, which are DDoS attacks carried out manually by hundreds of protesters attempting to overload the servers of the organization they are protesting by repeatedly requesting data.
SecurityTraq credits this site as providing an early introduction to the concept of Hacktivism, and the group is referenced in The Internet and Democracy, a paper by Roger Clarke prepared for IPAA/NOIE and included in a NOIE publication in September 2004. Their explanation of Hacktivism was published in the Hacktivist and credited in the Asheville Global Report as late as 2007.
Electric Dog
2000 - The Electric Dog is a remote control wireless camera robot created to demonstrate the practicality of simple robotics in routine office technical repair.
See also
2600 The Hacker Quarterly
Phrack
Legion of Doom
Chaos Computer Club
Cult of the Dead Cow
l0pht
Crypto-anarchism
Culture jamming
E-democracy
Hacker culture
Hacker ethic
Internet activism
References
External links
Collusion E-zine
1995 establishments in Texas
Computing culture
Politics and technology |
43191172 | https://en.wikipedia.org/wiki/ProtonMail | ProtonMail | ProtonMail is an end-to-end encrypted email service founded in 2013 in Geneva, Switzerland, by scientists who spent time at the CERN research facility. ProtonMail uses client-side encryption to protect email content and user data before they are sent to ProtonMail servers, unlike other common email providers such as Gmail and Outlook.com. The service can be accessed through a webmail client, the Tor network, or dedicated iOS and Android apps.
ProtonMail is run by its parent company Proton Technologies AG, which is based in the Canton of Geneva. The company also operates ProtonVPN, a VPN service. ProtonMail received initial funding through a crowdfunding campaign. Although the default account setup is free, the service is sustained by optional paid services. Initially invitation-only, ProtonMail opened up to the public in March 2016. In 2017, ProtonMail had over users, and grew to over 5 million by September 2018, 20 million by the end of 2019, and over 50 million in 2020.
History
Development
On 16 May 2014, ProtonMail entered into public beta. It was met with enough response that after three days they needed to temporarily suspend beta signups to expand server capacity. Two months later, ProtonMail received from 10,576 donors through a crowdfunding campaign on Indiegogo, while aiming for . During the campaign, PayPal froze ProtonMail's PayPal account, thereby preventing the withdrawal of worth of donations. PayPal stated that the account was frozen due to doubts of the legality of encryption, statements that opponents said were unfounded. The restrictions were lifted the following day.
On 18 March 2015, ProtonMail received from Charles River Ventures and the Fondation Genevoise pour l'Innovation Technologique (Fongit). On 14 August 2015, ProtonMail released major version 2.0, which included a rewritten codebase for its web interface. On 17 March 2016, ProtonMail released major version 3.0, which saw the official launch of ProtonMail out of beta. With a new interface for the web client, version 3.0 also included the public launch of ProtonMail's iOS and Android beta applications.
On 19 January 2017, ProtonMail announced support through Tor, at the hidden service address protonirockerxow.onion. On 21 November 2017, ProtonMail introduced ProtonMail Contacts, a zero-access encryption contacts manager. ProtonMail Contacts also utilizes digital signatures to verify the integrity of contacts data. On 6 December 2017, ProtonMail launched ProtonMail Bridge, an application that provides end-to-end email encryption to any desktop client that supports IMAP and SMTP, such as Microsoft Outlook, Mozilla Thunderbird, and Apple Mail, for Windows and MacOS.
On 25 July 2018, ProtonMail introduced address verification and Pretty Good Privacy (PGP) support, making ProtonMail interoperable with other PGP clients. In December 2019, ProtonMail launched "ProtonCalendar", a fully encrypted calendar.
The source code for the back-end remains closed source. However, ProtonMail released the source code for the web interface under an open-source license. ProtonMail also open-sourced its mobile clients for iOS and Android, as well as the ProtonMail Bridge app. All of this source code can be found on GitHub.
In September 2020, ProtonMail joined the Coalition for App Fairness, which aims to gain better conditions for the inclusion of its members' apps in app stores.
DDoS attacks
From 3 to 7 November 2015, ProtonMail was under several DDoS attacks that made the service largely unavailable to users. During the attacks, the company stated on Twitter that it was looking for a new data center in Switzerland, saying, "many are afraid due to the magnitude of the attack against us".
In July 2018, ProtonMail reported it was once more suffering from DDoS attacks. CEO Andy Yen claimed that the attackers had been paid by an unknown party to launch the attacks. In September 2018, one of the suspected ProtonMail attackers was arrested by British law enforcement and charged in connection with a series of other high-profile cyberattacks against schools and airlines.
Block in China
On 22 April 2020, ProtonMail confirmed on its ProtonVPN Twitter account that ProtonMail and ProtonVPN are banned in China because they do not allow the government to spy on users.
Block in Belarus
On 15 November 2019, Proton confirmed that the government of the Republic of Belarus had issued a country-wide block of ProtonMail and ProtonVPN IP addresses. The block was no longer in place four days later. No explanation was given to ProtonMail for the block, nor for the block being lifted.
Block in Russia
On 29 January 2020, the Russian Federal Service for Supervision of Communications, Information Technology and Mass Media reported that it had implemented a complete block of ProtonMail services within the Russian Federation. As a reason for the block, it cited ProtonMail's refusal to give up information relating to accounts that allegedly sent out spam with terror threats. However, ProtonMail claimed that it did not receive any requests from Russian authorities regarding any such accounts. In response to the block, the ProtonMail Twitter account recommended legitimate users circumvent the block via VPNs or Tor.
In March 2020, the company announced that even though the Russia ban was not particularly successful and the service continues to be largely available in Russia without utilising a VPN, ProtonMail will be releasing new anti-censorship features in both ProtonMail and ProtonVPN desktop and mobile apps which will allow more block attempts to be automatically circumvented.
Compliance with Swiss court orders and IP Logging
On 5 September 2021, ProtonMail confirmed it was forced to hand over IP addresses of French activists charged with theft and destruction of property after receiving a legally binding Swiss court order. Since article 271 of the Swiss Criminal Code prohibits Swiss companies from giving data to foreign authorities, French authorities asked the Swiss government for assistance. A similar request for assistance was made by the US government to the Swiss government in an August 2021 case involving death threats made against well-known immunologist Anthony Fauci. In that case however, ProtonMail was only able to provide a date of account creation.
On 6 September 2021, ProtonMail clarified its privacy policy to state: "If you are breaking Swiss law, ProtonMail can be legally compelled to log your IP address as part of a Swiss criminal investigation." For this reason, the company strongly suggests that users who need to hide their identity from the Swiss government use its Tor hidden service/onion site. The company also clarified in its official statement that it cannot be forced by law to compromise its encryption. According to ProtonMail's transparency report, it is legally obligated to follow Swiss court orders, and in 2020, ProtonMail received 3,572 orders from Swiss authorities and contested 750 of them.
Encryption
ProtonMail uses a combination of public-key cryptography and symmetric encryption protocols to offer end-to-end encryption. When a user creates a ProtonMail account, their browser generates a pair of public and private RSA keys:
The public key is used to encrypt the user's emails and other user data.
The private key capable of decrypting the user's data is symmetrically encrypted with the user's mailbox password.
This symmetrical encryption happens in the user's web browser using AES-256. Upon account registration, the user is asked to provide a login password for their account.
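The general pattern described above can be sketched as follows. This is not ProtonMail's actual implementation (ProtonMail performs these steps with OpenPGP in the browser); it is only a minimal illustration, using the third-party Python cryptography package, of generating a key pair client-side and storing the private key solely in a form encrypted under a key derived from the mailbox password.

"""Illustrative sketch of the key-handling pattern described above; not
ProtonMail's real code. Requires the third-party 'cryptography' package."""
import os
from cryptography.hazmat.primitives import serialization, hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def create_account_keys(mailbox_password: bytes):
    # 1. Generate the user's RSA key pair client-side.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    public_pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    private_pem = private_key.private_bytes(
        serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption())

    # 2. Derive a 256-bit AES key from the mailbox password.
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    aes_key = kdf.derive(mailbox_password)

    # 3. Encrypt the private key; only this ciphertext (plus salt and nonce)
    #    and the public key would ever be stored on the server.
    nonce = os.urandom(12)
    encrypted_private_key = AESGCM(aes_key).encrypt(nonce, private_pem, None)
    return public_pem, salt, nonce, encrypted_private_key

Because the password never leaves the client in this scheme, the server holds only material it cannot decrypt on its own.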
A lost login password can be recovered by sending an e-mail to ProtonMail Support. Two of the questions that are asked in order for Support to provide renewed access to the account are:
Do you remember to which addresses you have sent your last messages?
Do you remember the email subjects from the last sent messages?
This implies that these data are readable by support agents and hence by data analysis services. They constitute metadata, so networks of communicating accounts, along with subject headers, can be charted.
ProtonMail also offers users an option to log in with a two-password mode which requires a login password and a mailbox password.
The login password is used for authentication.
The mailbox password encrypts the user's mailbox that contains received emails, contacts, and user information as well as a private encryption key.
Upon logging in, the user has to provide both passwords. This is to access the account and the encrypted mailbox and its private encryption key. The decryption takes place client-side either in a web browser or in one of the apps. The public key and the encrypted private key are both stored on ProtonMail servers. Thus ProtonMail stores decryption keys only in their encrypted form so ProtonMail developers are unable to retrieve user emails or reset user mailbox passwords. This system absolves ProtonMail from:
Storing either the unencrypted data or the mailbox password.
Divulging the contents of past emails (though not necessarily future emails).
Decrypting the mailbox if requested or compelled by a court order.
ProtonMail exclusively supports HTTPS and uses TLS with ephemeral key exchange to encrypt all Internet traffic between users and ProtonMail servers. Their 4096-bit RSA SSL certificate is signed by QuoVadis Trustlink Schweiz AG and supports Extended Validation, Certificate Transparency, Public Key Pinning, and Strict Transport Security. Protonmail.com holds an "A+" rating from Qualys SSL Labs.
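As an aside, the negotiated TLS parameters for any HTTPS host can be inspected with a few lines of standard-library Python; the snippet below is a generic illustration (the host name is just an example), not a ProtonMail tool. An ECDHE/DHE cipher name, or TLS 1.3 (where key exchange is always ephemeral), indicates the forward secrecy described above.

"""Generic illustration using only the Python standard library."""
import socket
import ssl

def tls_info(host: str, port: int = 443):
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (cipher_name, protocol_version, secret_bits)
            return tls.version(), tls.cipher()

print(tls_info("protonmail.com"))  # example host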
In September 2015, ProtonMail added native support to their web interface and mobile app for PGP. This allows a user to export their ProtonMail PGP-encoded public key to others outside of ProtonMail, enabling them to use the key for email encryption. The ProtonMail team plans to support PGP encryption from ProtonMail to outside users.
A drawback of keeping the mail bodies encrypted is that the ProtonMail servers cannot search within them, although they can search their metadata. The problem gets worse as mail archives grow larger and users have difficulty narrowing down their search targets. A workaround is to use ProtonMail Bridge to download and decrypt the messages and search them locally.
Email sending
An email message sent from one ProtonMail account to another is automatically encrypted with the public key of the recipient. Once encrypted, only the private key of the recipient can decrypt the message. When the recipient logs in, their mailbox password decrypts their private key and unlocks their inbox.
Email messages sent from ProtonMail to non-ProtonMail email addresses may optionally be sent in plain text or with end-to-end encryption. With encryption, the message is encrypted with AES under a user-supplied password. The recipient receives a link to the ProtonMail website on which they can enter the password and read the decrypted message. ProtonMail assumes that the sender and the recipient have exchanged this password through a backchannel. Such email messages can be set to self-destruct after a period of time.
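A rough sketch of the two sending flows follows. It is illustrative only: ProtonMail itself uses OpenPGP rather than this exact construction, and the function names are invented. The first function shows the hybrid pattern for mail between ProtonMail users (a random session key protects the body and is wrapped with the recipient's public key); the second shows symmetric, password-based encryption for external recipients. It requires the third-party Python cryptography package.

"""Sketch of the two outgoing-mail flows described above; illustrative only."""
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_to_protonmail_user(message: bytes, recipient_public_key):
    # Hybrid encryption: a random AES session key protects the body, and
    # only the recipient's private key can unwrap that session key.
    session_key, nonce = os.urandom(32), os.urandom(12)
    body = AESGCM(session_key).encrypt(nonce, message, None)
    wrapped_key = recipient_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, body

def encrypt_to_external_address(message: bytes, shared_password: bytes):
    # Symmetric encryption under a password exchanged out of band; the
    # recipient decrypts via a web link rather than receiving plaintext mail.
    salt, nonce = os.urandom(16), os.urandom(12)
    key = PBKDF2HMAC(hashes.SHA256(), 32, salt, 600_000).derive(shared_password)
    return salt, nonce, AESGCM(key).encrypt(nonce, message, None)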
Location and security
Both ProtonMail and ProtonVPN are located in Switzerland to avoid surveillance and information requests from countries in the Fourteen Eyes alliance, whether made under government surveillance laws such as the United States' Patriot Act or outside the bounds of law.
The company claims that it is also located in Switzerland because of its strict privacy laws.
In 2018 Nadim Kobeissi published an article arguing that as ProtonMail was generally accessed through a web client, "no end-to-end encryption guarantees have ever been provided by the ProtonMail service."
In 2020–2021, climate activists were arrested in France, after ProtonMail recorded and transmitted IP addresses to the authorities (upon request from French Police via Europol to the Swiss Federal Department of Justice and Police).
Data portability
ProtonMail limits data portability by locking support for external email client software through IMAP and POP3 protocols behind a paywall. As of 2021, users are unable to back up their email account locally without paying.
Data centres
ProtonMail maintains and owns its own server hardware and network in order to avoid utilizing a third party. It maintains two data centres, one in Lausanne and another in Attinghausen (in the former K7 military bunker under of granite) as a backup. Since the servers are located in Switzerland, they are legally outside of the jurisdiction of the European Union, United States, and other countries. Under Swiss law, all surveillance requests from foreign countries must go through a Swiss court and are subject to international treaties. Prospective surveillance targets are promptly notified and can appeal the request in court.
Each data centre uses load balancing across web, mail, and SQL servers, redundant power supply, hard drives with full disk encryption, and exclusive use of Linux and other open-source software. In December 2014, ProtonMail joined the RIPE NCC in an effort to have more direct control over the surrounding Internet infrastructure.
Two-factor authentication
ProtonMail supports two-factor authentication with TOTP tokens for its login process. As of October 2019, according to the official ProtonMail blog, U2F support for YubiKey and other FIDO physical security keys was under development and expected to become available soon after the release of v4.0.
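For illustration, a TOTP code of the kind produced by such tokens can be computed from the shared secret with standard-library Python as follows (RFC 6238 with the common defaults of HMAC-SHA-1, 30-second steps and six digits). The secret shown is a made-up example, not a real account credential.

"""Illustrative TOTP computation (RFC 6238), standard library only."""
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                 # moving time counter
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real account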
Account types
, ProtonMail offers the following plans:
See also
Comparison of mail servers
Comparison of webmail providers
References
External links
Free software webmail
Internet properties established in 2013
Cross-platform software
Software using the MIT license
Software using the GPL license
Free security software
Cryptographic software
Secure communication
Internet privacy software
Swiss brands
Tor onion services |
2030607 | https://en.wikipedia.org/wiki/Google%20data%20centers | Google data centers | Google data centers are the large data center facilities Google uses to provide their services, which combine large drives, computer nodes organized in aisles of racks, internal and external networking, environmental controls (mainly cooling and humidification control), and operations software (especially as concerns load balancing and fault tolerance).
There is no official data on how many servers are in Google data centers, but Gartner estimated in a July 2016 report that Google at the time had 2.5 million servers. This number is changing as the company expands capacity and refreshes its hardware.
Locations
The locations of Google's various data centers by continent are as follows:
North America
Berkeley County, South Carolina — since 2007, expanded in 2013, 150 employees
Council Bluffs, Iowa — announced 2007, first phase completed 2009, expanded 2013 and 2014, 130 employees
Douglas County, Georgia — since 2003, 350 employees
Bridgeport, Alabama
Lenoir, North Carolina — announced 2007, completed 2009, over 110 employees
Montgomery County, Tennessee — announced 2015
Mayes County, Oklahoma at MidAmerica Industrial Park — announced 2007, expanded 2012, over 400 employees
The Dalles, Oregon — since 2006, 80 full-time employees
Reno, Nevada — announced in 2018: 1,210 acres of land bought in 2017 in the Tahoe Reno Industrial Center; project approved by the state of Nevada in November 2018
Henderson, Nevada — announced in 2019; 64 acres; $1.2B building costs
Loudoun County, Virginia — announced in 2019
Northland, Kansas City — announced in 2019, under construction
Midlothian, Texas — announced in 2019; 375 acres; $600M building costs
New Albany, Ohio — announced in 2019; 400 acres; $600M building costs
Papillion, Nebraska — announced in 2019; 275 acres; $600M building costs
Salt Lake City, Utah — announced in 2020
South America
Quilicura, Chile — announced 2012, online since 2015, up to 20 employees expected. A million investment plan to increase capacity at Quilicura was announced in 2018.
Cerrillos, Chile – announced for 2020
Colonia Nicolich, Uruguay – announced 2019
Europe
Saint-Ghislain, Belgium () — announced 2007, completed 2010, 12 employees
Hamina, Finland () — announced 2009, first phase completed 2011, expanded 2012, 90 employees
Dublin, Ireland () — announced 2011, completed 2012, 150 employees
Eemshaven, Netherlands () — announced 2014, completed 2016, 200 employees, €500 million expansion announced in 2018
Hollands Kroon (Agriport), Netherlands – announced 2019
Fredericia, Denmark () — announced 2018, €600M building costs, completed in November 2020
Zürich, Switzerland – announced in 2018, completed 2019
Warsaw, Poland – announced in 2019, completed in 2021
Asia
Jurong West, Singapore () — announced 2011, completed 2013
Changhua County, Taiwan () — announced 2011, completed 2013, 60 employees
Mumbai, India — announced 2017, completed 2019
Tainan City, Taiwan — announced September 2019
Yunlin County, Taiwan — announced September 2020
Jakarta, Indonesia — announced in 2020
New Delhi, India — announced in 2020, completed in July 2021
Hardware
Original hardware
The original hardware (circa 1998) that was used by Google when it was located at Stanford University included:
Sun Microsystems Ultra II with dual 200 MHz processors, and 256 MB of RAM. This was the main machine for the original Backrub system.
2 × 300 MHz dual Pentium II servers donated by Intel, they included 512 MB of RAM and 10 × 9 GB hard drives between the two. It was on these that the main search ran.
F50 IBM RS/6000 donated by IBM, included 4 processors, 512 MB of memory and 8 × 9 GB hard disk drives.
Two additional boxes included 3 × 9 GB hard drives and 6 x 4 GB hard disk drives respectively (the original storage for Backrub). These were attached to the Sun Ultra II.
SDD disk expansion box with another 8 × 9 GB hard disk drives donated by IBM.
Homemade disk box which contained 10 × 9 GB SCSI hard disk drives.
Production hardware
As of 2014, Google used a heavily customized version of Debian Linux, having migrated incrementally from a Red Hat-based system in 2013.
The customization goal is to purchase CPU generations that offer the best performance per dollar, not absolute performance. How this is measured is unclear, but it is likely to incorporate running costs of the entire server, and CPU power consumption could be a significant factor. Servers as of 2009–2010 consisted of custom-made open-top systems containing two processors (each with several cores), a considerable amount of RAM spread over 8 DIMM slots housing double-height DIMMs, and at least two SATA hard disk drives connected through a non-standard ATX-sized power supply unit. The servers were open top so more servers could fit into a rack. According to CNET and a book by John Hennessy, each server had a novel 12-volt battery to reduce costs and improve power efficiency.
According to Google, the total electrical power drawn by its global data center operations ranges between 500 and 681 megawatts.
The combined processing power of these servers might have reached from 20 to 100 petaflops in 2008.
Network topology
Details of the Google worldwide private networks are not publicly available, but Google publications make references to the "Atlas Top 10" report that ranks Google as the third largest ISP behind Level 3.
In order to run such a large network, with direct connections to as many ISPs as possible at the lowest possible cost, Google has a very open peering policy.
Public peering records show that the Google network can be accessed from 67 public exchange points and 69 different locations across the world. As of May 2012, Google had 882 Gbit/s of public connectivity (not counting private peering agreements that Google has with the largest ISPs). This public network is used to distribute content to Google users as well as to crawl the internet to build its search indexes.
The private side of the network is secret, but disclosures from Google indicate that it uses custom-built high-radix switch-routers (with a capacity of 128 × 10 Gigabit Ethernet ports) for the wide area network. Running no fewer than two routers per data center (for redundancy), the Google network scales into the terabit-per-second range; with two fully loaded routers, the bisectional bandwidth amounts to 1,280 Gbit/s.
These custom switch-routers are connected to DWDM devices to interconnect data centers and point of presences (PoP) via dark fiber.
From a data center view, the network starts at the rack level, where 19-inch racks are custom-made and contain 40 to 80 servers (20 to 40 1U servers on either side, while newer servers are 2U rackmount systems); each rack has an Ethernet switch. Servers are connected via a 1 Gbit/s Ethernet link to the top-of-rack switch (TOR). TOR switches are then connected to a gigabit cluster switch using multiple gigabit or ten-gigabit uplinks. The cluster switches themselves are interconnected and form the data center interconnect fabric (most likely using a dragonfly design rather than a classic butterfly or flattened butterfly layout).
From an operational standpoint, when a client computer attempts to connect to Google, several DNS servers resolve www.google.com into multiple IP addresses via a round-robin policy. This acts as the first level of load balancing, directing clients to different Google clusters. A Google cluster has thousands of servers; once the client has connected to one of them, additional load balancing is done to send the queries to the least-loaded web server. This makes Google one of the largest and most complex content delivery networks.
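The round-robin DNS step can be illustrated with a short Python sketch using only the standard library; the hostname lookup and the rotation policy below are an illustrative approximation, not a description of Google's actual (undisclosed) traffic-steering logic.

  # Sketch: resolve a hostname to the IP addresses currently advertised by
  # DNS and rotate through them, approximating round-robin client steering.
  # The hostname and the selection policy are illustrative assumptions only.
  import itertools
  import socket

  def resolve_all(hostname, port=443):
      """Return the distinct addresses DNS currently advertises for the host."""
      infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
      return sorted({info[4][0] for info in infos})

  addresses = resolve_all("www.google.com")
  rotation = itertools.cycle(addresses)          # round-robin over the answers
  for _ in range(len(addresses)):
      print("next connection would target", next(rotation))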
Google has numerous data centers scattered around the world. At least 12 significant Google data center installations are located in the United States. The largest known centers are located in The Dalles, Oregon; Atlanta, Georgia; Reston, Virginia; Lenoir, North Carolina; and Moncks Corner, South Carolina. In Europe, the largest known centers are in Eemshaven and Groningen in the Netherlands and Mons, Belgium. Google's Oceania Data Center is located in Sydney, Australia.
Data center network topology
To support fault tolerance, increase the scale of data centers and accommodate low-radix switches, Google has adopted various modified Clos topologies in the past.
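The published details of Google's specific fabrics are limited, so the following Python sketch builds only a generic two-stage leaf–spine (folded Clos) topology with made-up switch counts; it is meant to show how such a design yields multiple redundant paths between racks, not to reproduce Google's network.

  # Sketch: a generic two-stage leaf-spine (folded Clos) fabric. Switch
  # counts are arbitrary illustrative values, not Google's actual topology.
  def build_leaf_spine(num_leaves=4, num_spines=2):
      """Connect every leaf (rack-level) switch to every spine switch."""
      return [(f"leaf{l}", f"spine{s}")
              for l in range(num_leaves) for s in range(num_spines)]

  def paths_between(links, leaf_a, leaf_b):
      """Each spine reachable from both leaves provides one redundant path."""
      spines_a = {s for l, s in links if l == leaf_a}
      spines_b = {s for l, s in links if l == leaf_b}
      return [(leaf_a, spine, leaf_b) for spine in sorted(spines_a & spines_b)]

  fabric = build_leaf_spine()
  print(paths_between(fabric, "leaf0", "leaf3"))
  # [('leaf0', 'spine0', 'leaf3'), ('leaf0', 'spine1', 'leaf3')]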
Project 02
One of the largest Google data centers is located in the town of The Dalles, Oregon, on the Columbia River, approximately 80 miles (129 km) from Portland. Codenamed "Project 02", the multimillion-dollar complex was built in 2006 and is approximately the size of two American football fields, with cooling towers four stories high. The site was chosen to take advantage of inexpensive hydroelectric power, and to tap into the region's large surplus of fiber optic cable, a remnant of the dot-com boom. A blueprint of the site appeared in 2008.
Summa papermill
In February 2009, Stora Enso announced that they had sold the Summa paper mill in Hamina, Finland, to Google for 40 million euros. Google invested 200 million euros in the site to build a data center and announced an additional 150 million euro investment in 2012. Google chose this location due to the availability and proximity of renewable energy sources.
Modular container data centers
In 2005, Google was researching a containerized modular data center. Google filed a patent application for this technology in 2003.
Floating data centers
In 2013, the press revealed the existence of Google's floating data centers along the coasts of the states of California (Treasure Island's Building 3) and Maine. The development project was maintained under tight secrecy. The data centers are 250 feet long, 72 feet wide, 16 feet deep. The patent for an in-ocean data center cooling technology was bought by Google in 2009 (along with a wave-powered ship-based data center patent in 2008). Shortly thereafter, Google declared that the two massive and secretly-built infrastructures were merely "interactive learning centers, [...] a space where people can learn about new technology."
Google halted work on the barges in late 2013 and began selling off the barges in 2014.
Software
Most of the software stack that Google uses on their servers was developed in-house. According to a well-known Google employee, C++, Java, Python and (more recently) Go are favored over other programming languages. For example, the back end of Gmail is written in Java and the back end of Google Search is written in C++. Google has acknowledged that Python has played an important role from the beginning, and that it continues to do so as the system grows and evolves.
The software that runs the Google infrastructure includes:
Google Web Server (GWS) custom Linux-based Web server that Google uses for its online services.
Storage systems:
Google File System and its successor, Colossus
Bigtable structured storage built upon GFS/Colossus
Spanner planet-scale database, supporting externally-consistent distributed transactions
Google F1 a distributed, quasi-SQL DBMS based on Spanner, which replaced an earlier custom version of MySQL.
Chubby lock service
MapReduce and Sawzall programming language
Indexing/search systems:
TeraGoogle Google's large search index (launched in early 2006)
Caffeine (Percolator) continuous indexing system (launched in 2010).
Hummingbird major search index update, including complex search and voice search.
Borg declarative process scheduling software
Google has developed several abstractions which it uses for storing most of its data:
Protocol Buffers "Google's lingua franca for data", a binary serialization format which is widely used within the company.
SSTable (Sorted Strings Table) a persistent, ordered, immutable map from keys to values, where both keys and values are arbitrary byte strings. It is also used as one of the building blocks of Bigtable; a simplified sketch of the concept appears after this list.
RecordIO a sequence of variable sized records.
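Google's actual SSTable file format is not public; the following Python sketch models only the concept described above, an immutable map of byte-string keys to byte-string values that supports point lookups and ordered range scans.

  # Conceptual sketch of an SSTable: an immutable, sorted map from byte-string
  # keys to byte-string values. This illustrates the idea only, not Google's
  # on-disk format.
  import bisect

  class SimpleSSTable:
      def __init__(self, entries):
          # Sorted once at construction; the table is immutable afterwards.
          items = sorted(entries.items())
          self._keys = [k for k, _ in items]
          self._values = [v for _, v in items]

      def get(self, key):
          i = bisect.bisect_left(self._keys, key)
          if i < len(self._keys) and self._keys[i] == key:
              return self._values[i]
          return None

      def scan(self, start, end):
          """Yield (key, value) pairs with start <= key < end, in key order."""
          i = bisect.bisect_left(self._keys, start)
          while i < len(self._keys) and self._keys[i] < end:
              yield self._keys[i], self._values[i]
              i += 1

  table = SimpleSSTable({b"banana": b"2", b"apple": b"1", b"cherry": b"3"})
  print(table.get(b"apple"))           # b'1'
  print(list(table.scan(b"b", b"d")))  # the banana and cherry entries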
Software development practices
Most operations are read-only. When an update is required, queries are redirected to other servers to simplify consistency issues. Queries are divided into sub-queries, which may be sent to different servers in parallel, thus reducing latency.
To lessen the effects of unavoidable hardware failure, software is designed to be fault tolerant. Thus, when a system goes down, data is still available on other servers, which increases reliability.
Search infrastructure
Index
Like most search engines, Google indexes documents by building a data structure known as an inverted index, which maps each query word to a list of the documents that contain it. The index is very large due to the number of documents stored on the servers.
The index is partitioned by document IDs into many pieces called shards. Each shard is replicated onto multiple servers. Initially, the index was served from hard disk drives, as is done in traditional information retrieval (IR) systems. Google dealt with the increasing query volume by increasing the number of replicas of each shard and thus increasing the number of servers. Soon they found that they had enough servers to keep a copy of the whole index in main memory (although with low replication or no replication at all), and in early 2001 Google switched to an in-memory index system. This switch "radically changed many design parameters" of their search system, and allowed for a significant increase in throughput and a large decrease in latency of queries.
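A minimal Python sketch of the two ideas above, an inverted index partitioned into shards by document ID, is shown below; the shard count and the whitespace tokenizer are arbitrary illustrative choices rather than details of Google's system.

  # Sketch: build a tiny inverted index and partition it into shards by
  # document ID. Shard count and tokenization are illustrative assumptions.
  from collections import defaultdict

  NUM_SHARDS = 3

  def build_sharded_index(documents):
      """documents maps doc_id -> text; returns one small index per shard."""
      shards = [defaultdict(set) for _ in range(NUM_SHARDS)]
      for doc_id, text in documents.items():
          shard = shards[doc_id % NUM_SHARDS]      # partition by document ID
          for word in text.lower().split():
              shard[word].add(doc_id)
      return shards

  def lookup(shards, word):
      """Query every shard; in practice each shard lives on its own servers."""
      result = set()
      for shard in shards:
          result |= shard.get(word, set())
      return sorted(result)

  docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick dog"}
  index = build_sharded_index(docs)
  print(lookup(index, "quick"))   # [1, 3]
  print(lookup(index, "dog"))     # [2, 3]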
In June 2010, Google rolled out a next-generation indexing and serving system called "Caffeine" which can continuously crawl and update the search index. Previously, Google updated its search index in batches using a series of MapReduce jobs. The index was separated into several layers, some of which were updated faster than the others, and the main layer wouldn't be updated for as long as two weeks. With Caffeine, the entire index is updated incrementally on a continuous basis. Later Google revealed a distributed data processing system called "Percolator" which is said to be the basis of Caffeine indexing system.
Server types
Google's server infrastructure is divided into several types, each assigned to a different purpose:
Web servers coordinate the execution of queries sent by users, then format the result into an HTML page. The execution consists of sending queries to index servers, merging the results, computing their rank, retrieving a summary for each hit (using the document server), asking for suggestions from the spelling servers, and finally getting a list of advertisements from the ad server.
Data-gathering servers are permanently dedicated to spidering the Web. Google's web crawler is known as GoogleBot. They update the index and document databases and apply Google's algorithms to assign ranks to pages.
Each index server contains a set of index shards. They return a list of document IDs ("docid"), such that documents corresponding to a certain docid contain the query word. These servers need less disk space, but suffer the greatest CPU workload.
Document servers store documents. Each document is stored on dozens of document servers. When performing a search, a document server returns a summary for the document based on query words. They can also fetch the complete document when asked. These servers need more disk space.
Ad servers manage advertisements offered by services like AdWords and AdSense.
Spelling servers make suggestions about the spelling of queries.
Security
In October 2013, The Washington Post reported that the U.S. National Security Agency intercepted communications between Google's data centers, as part of a program named MUSCULAR. This wiretapping was made possible because, at the time, Google did not encrypt data passed inside its own network. This was rectified when Google began encrypting data sent between data centers in 2013.
Environmental impact
Google's most efficient data center runs using only fresh air cooling, requiring no electrically powered air conditioning.
In December 2016, Google announced that—starting in 2017—it would purchase enough renewable energy to match 100% of the energy usage of its data centers and offices. The commitment will make Google "the world's largest corporate buyer of renewable power, with commitments reaching 2.6 gigawatts (2,600 megawatts) of wind and solar energy".
References
Further reading
Shankland, Stephen, CNET news "Google uncloaks once-secret server." April 1, 2009.
External links
Google Research Publications
Web Search for a Planet: The Google Cluster Architecture (Luiz André Barroso, Jeffrey Dean, Urs Hölzle)
Google real estate
Data centers |
14264241 | https://en.wikipedia.org/wiki/Ceph%20%28software%29 | Ceph (software) | Ceph is an open-source software-defined storage platform which implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability. Since version 12, Ceph does not rely on other filesystems and directly manages HDDs and SSDs with its own storage backend, BlueStore, and can expose a POSIX filesystem entirely on its own.
Ceph replicates data and makes it fault-tolerant, using commodity hardware and Ethernet IP networking and requiring no specific hardware support. Ceph offers disaster recovery and data redundancy through techniques such as replication, erasure coding, snapshots and storage cloning. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs.
In this way, administrators have a single, consolidated system that avoids silos and collects the storage within a common management framework.
Ceph consolidates several storage use cases and improves resource utilization. It also lets an organization deploy servers where needed.
Design
Ceph employs five distinct kinds of daemons:
Cluster monitors (ceph-mon) that keep track of active and failed cluster nodes, cluster configuration, and information about data placement and global cluster state.
Object storage devices (ceph-osd) that use direct, journaled disk storage (named BlueStore, which since the v12.x release replaces FileStore, which used a filesystem)
Metadata servers (ceph-mds) that cache and broker access to inodes and directories inside a CephFS filesystem.
HTTP gateways (ceph-rgw) that expose the object storage layer as an interface compatible with the Amazon S3 or OpenStack Swift APIs
Managers (ceph-mgr) that perform cluster monitoring, bookkeeping, and maintenance tasks, and interface to external monitoring systems and management (e.g. balancer, dashboard, Prometheus, Zabbix plugin)
All of these are fully distributed, and may run on the same set of servers. Clients with different needs can directly interact with different subsets of them.
Ceph does striping of individual files across multiple nodes to achieve higher throughput, similar to how RAID0 stripes partitions across multiple hard drives. Adaptive load balancing is supported whereby frequently accessed objects are replicated over more nodes.
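Ceph's real data placement is computed by the CRUSH algorithm, which is not reproduced here; the hypothetical Python sketch below only illustrates the general idea of striping, splitting data into fixed-size units that are spread across several storage daemons.

  # Sketch: split a byte string into fixed-size stripe units assigned to
  # storage daemons round-robin. A conceptual illustration of striping only;
  # Ceph's actual placement is determined by CRUSH.
  def stripe(data, nodes, unit_size=4):
      placement = []
      for offset in range(0, len(data), unit_size):
          unit = data[offset:offset + unit_size]
          node = nodes[(offset // unit_size) % len(nodes)]
          placement.append((node, unit))
      return placement

  for node, unit in stripe(b"abcdefghijklmnop", ["osd.0", "osd.1", "osd.2"]):
      print(node, unit)
  # osd.0 b'abcd', osd.1 b'efgh', osd.2 b'ijkl', osd.0 b'mnop'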
BlueStore is the default and recommended storage type for production environments. It is Ceph's own storage implementation, providing better latency and configurability than the FileStore backend and avoiding the shortcomings of filesystem-based storage, which involves additional processing and caching layers. The FileStore backend is still considered useful and very stable; XFS used to be the recommended underlying filesystem type for production environments, while Btrfs was recommended for non-production environments. ext4 filesystems were not recommended because of resulting limitations on the maximum RADOS object length. Even with BlueStore, XFS is used for a small metadata partition.
Object storage S3
Ceph implements distributed object storage on top of its reliable autonomic distributed object store (RADOS), with BlueStore as the per-node storage backend.
The RADOS gateway (ceph-rgw) exposes the object storage layer as an interface compatible with Amazon S3.
These are often high-capacity disks, which are associated with Ceph's S3 object storage for use cases such as big data (data lakes), backup and archives, IoT, media, and video recording.
Ceph's software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph's features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System. In this way, administrators can maintain their storage devices as a unified system, which makes it easier to replicate and protect the data.
The "librados" software libraries provide access in C, C++, Java, PHP, and Python. The RADOS Gateway also exposes the object store as a RESTful interface which can present as both native Amazon S3 and OpenStack Swift APIs.
Block storage
Ceph's object storage system allows users to mount Ceph as a thin-provisioned block device. When an application writes data to Ceph using a block device, Ceph automatically stripes and replicates the data across the cluster. Ceph's RADOS Block Device (RBD) also integrates with Kernel-based Virtual Machines (KVMs).
These are often fast disks (NVMe, SSD) which are associated with Ceph's block storage for use cases, including databases, virtual machines, data analytics, artificial intelligence, and machine learning.
"Ceph-RBD" interfaces with the same Ceph object storage system that provides the librados interface and the CephFS file system, and it stores block device images as objects. Since RBD is built on librados, RBD inherits librados's abilities, including read-only snapshots and revert to snapshot. By striping images across the cluster, Ceph improves read access performance for large block device images.
"Ceph-iSCSI" is a gateway which enables access to distributed, highly available block storage from any Microsoft Windows and VMWare vSphere server or client capable of speaking the iSCSI protocol. By using ceph-iscsi on one or more iSCSI gateway hosts, Ceph RBD images become available as Logical Units (LUs) associated with iSCSI targets, which can be accessed in an optionally load-balanced, highly available fashion.
Since all of ceph-iscsi configuration is stored in the Ceph RADOS object store, ceph-iscsi gateway hosts are inherently without persistent state and thus can be replaced, augmented, or reduced at will. As a result, Ceph Storage enables customers to run a truly distributed, highly-available, resilient, and self-healing enterprise storage technology on commodity hardware and an entirely open source platform.
The block device can be virtualized, providing block storage to virtual machines, in virtualization platforms such as Openshift, OpenStack, Kubernetes, OpenNebula, Ganeti, Apache CloudStack and Proxmox Virtual Environment.
File system storage
Ceph's file system (CephFS) runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and file names of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.
Clients mount the POSIX-compatible file system using a Linux kernel client. An older FUSE-based client is also available. The servers run as regular Unix daemons.
Ceph's file storage is often associated with log collection, messaging, and file storage.
History
Ceph was initially created by Sage Weil for his doctoral dissertation, which was advised by Professor Scott A. Brandt at the Jack Baskin School of Engineering, University of California, Santa Cruz (UCSC), and sponsored by the Advanced Simulation and Computing Program (ASC), including Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL). The first line of code that ended up being part of Ceph was written by Sage Weil in 2004 while at a summer internship at LLNL, working on scalable filesystem metadata management (known today as Ceph's MDS). In 2005, as part of a summer project initiated by Scott A. Brandt and led by Carlos Maltzahn, Sage Weil created a fully functional file system prototype which adopted the name Ceph. Ceph made its debut with Sage Weil giving two presentations in November 2006, one at USENIX OSDI 2006 and another at SC'06.
After his graduation in autumn 2007, Weil continued to work on Ceph full-time, and the core development team expanded to include Yehuda Sadeh Weinraub and Gregory Farnum. On March 19, 2010, Linus Torvalds merged the Ceph client into Linux kernel version 2.6.34 which was released on May 16, 2010. In 2012, Weil created Inktank Storage for professional services and support for Ceph.
In April 2014, Red Hat purchased Inktank, bringing the majority of Ceph development in-house to make it a production version for enterprises with support (hotline) and continuous maintenance (new versions).
In October 2015, the Ceph Community Advisory Board was formed to assist the community in driving the direction of open source software-defined storage technology. The charter advisory board includes Ceph community members from global IT organizations that are committed to the Ceph project, including individuals from Red Hat, Intel, Canonical, CERN, Cisco, Fujitsu, SanDisk, and SUSE.
In November 2018, the Linux Foundation launched the Ceph Foundation as a successor to the Ceph Community Advisory Board. Founding members of the Ceph Foundation included Amihan, Canonical, China Mobile, DigitalOcean, Intel, OVH, ProphetStor Data Services, Red Hat, SoftIron, SUSE, Western Digital, XSKY Data Technology, and ZTE.
In March 2021, SUSE discontinued its Enterprise Storage product, which incorporated Ceph, in favor of Longhorn, and the former Enterprise Storage website was updated stating "SUSE has refocused the storage efforts around serving our strategic SUSE Enterprise Storage Customers and are no longer actively selling SUSE Enterprise Storage."
Release history
Ceph releases
Etymology
The name "Ceph" is an abbreviation of "cephalopod", a class of molluscs that includes the octopus. The name (emphasized by the logo) suggests the highly parallel behavior of an octopus and was chosen to associate the file system with "Sammy", the banana slug mascot of UCSC. Both cephalopods and banana slugs are molluscs.
See also
BeeGFS
Distributed file system
Distributed parallel fault-tolerant file systems
Gfarm file system
GlusterFS
IBM General Parallel File System (GPFS)
Kubernetes
LizardFS
Lustre
MapR FS
Moose File System
OrangeFS
Parallel Virtual File System
Quantcast File System
RozoFS
Software-defined storage
XtreemFS
ZFS
Comparison of distributed file systems
References
Further reading
External links
UCSC Systems Research Lab
Storage Systems Research Center
Ceph Performance and Optimization, Ceph Day Frankfurt (2014) at Slideshare
Distributed file systems supported by the Linux kernel
Free software
Network file systems
Red Hat software
Userspace file systems
Virtualization-related software for Linux |
41557664 | https://en.wikipedia.org/wiki/Ian%20Bryant%20%28academic%29 | Ian Bryant (academic) | Ian Bryant (born 1965) is a British academic, engaged in promoting Trustworthy Software, and in Standardisation.
Current roles
Ian Bryant is best known for his roles in promoting Trustworthy Software (currently for the Trustworthy Software Foundation), but also has roles as:
Adjunct Faculty at University of Warwick Cyber Security Centre (CSC)
Adjunct Faculty at De Montfort University Cyber Security Centre (CSC)
Standards Coherence, predominantly with the British Standards Institution (BSI) and its international linked SDO (Standards Development Organisations):
BSI IST/033 – Expert Committee on Information Security (UK shadow for ISO/IEC JTC1 SC27), where he is Chair of Sub-committee IST/033/4 (Controls and Services; UK shadow for ISO/IEC JTC1 SC27 WG4)
BSI IST/015 – Expert Committee on Systems and Software Engineering (UK shadow for ISO/IEC JTC1 SC7)
BSI IST/038 – Expert Committee on Distributed Application Processes and Services (UK shadow for ISO/IEC JTC1 SC38)
BSI ICT/00-/09 - Chair of Project Committee for BS10754 series on Trustworthy Systems (replacing British Standards (BS) Publicly Available Specification (PAS) 754)
Early and personal life
Ian Bryant was educated at Taunton School in Somerset, and the University of Leicester where he studied Engineering.
Career
Ian Bryant has been a Professional Engineer employed by HM Government for much of his career, either as a technical specialist and/or project manager, with assignments spanning a variety of organisations, including Cabinet Office, MOD, National Archives, National Policing, and the former National Infrastructure Security Coordination Centre (now CPNI).
He has been involved with "Cyber Security" (and its various predecessor terms) since the 1980s, in a variety of roles including Investigation / Incident Response, Security Architecture, Systems Accreditation, Research and Technology Management, and Policy Development.
His work on Trustworthy Software originated with leading the original Cabinet Office (CSIA) study on Secure Software Development (SSD), then being the Technical Manager for the Pilot Operation of the CSIA (now CESG) Claims Tested Mark (CCT Mark) Scheme. Subsequently, he contributed to the Technology Strategy Board (TSB) Cyber Security Knowledge Transfer Network (CSKTN) Special Interest Group (SIG) on Secure Software Development, and latterly led the Secure Software Development Partnership's (SSDP) SIG on Standards before the formalisation of the Software Security, Dependability and Resilience Initiative (SSDRI – the original name for TSI) in July 2011.
He also developed and launched the IT Security Awareness for Everyone (ITSafe) service—now part of GetSafeOnline and helped found the National Information Assurance Forum (NIAF – formerly "GIPSI") which he now Co-Chairs.
Recent research activity includes leading a NATO Research Task Group (RTG), and being the lead Information Security specialist for the European Commission (EC) funded MS3i and NEISAS Projects.
References
Living people
Academics of De Montfort University
Academics of the University of Warwick
British computer scientists
People educated at Taunton School
Alumni of the University of Leicester
1965 births |
33597222 | https://en.wikipedia.org/wiki/ICT%201900%20series | ICT 1900 series | ICT 1900 was a family of mainframe computers released by International Computers and Tabulators (ICT) and later International Computers Limited (ICL) during the 1960s and 1970s. The 1900 series was notable for being one of the few non-American competitors to the IBM System/360, enjoying significant success in the European and British Commonwealth markets.
Origins
In early 1963, ICT was engaged in negotiations to buy the computer business of Ferranti. In order to sweeten the deal, Ferranti demonstrated to ICT the Ferranti-Packard 6000 (FP6000) machine, which had been developed by its Canadian subsidiary Ferranti-Packard, to a design known as Harriac that had been initiated in Ferranti by Harry Johnson and fleshed out by Stanley Gill and John Iliffe.
The FP6000 was an advanced design, notably including hardware support for multiprogramming. ICT considered using the FP6000 as their medium-sized processor in the 1965–1968 timeframe, replacing the ICT 1302. Another plan being considered was to license a new range of machines being developed by RCA, probably compatible with the expected IBM 8000.
On 7 April 1964 IBM announced the System/360 series, a family of compatible machines spanning nearly the complete range of customer needs. It was immediately obvious that ICT would need a coherent response. Two paths were available: develop a range of machines based on the FP6000, using the flexibility of its design to produce smaller or larger machines, or cooperate with RCA who were re-targeting their development to a System/360 compatible range to be known as the RCA Spectra 70.
One major consideration was that the FP6000 was already running, while the RCA Spectra range would take some years to become available. In the end, the decision was made to go with a range of machines based on the FP6000. The centrepiece of the new range was the ICT 1904, a version of the FP6000 with the ICT standard peripheral interface. For higher-end machines, a new larger processor, the ICT 1906, was to be developed by the ICT West Gorton unit (formerly part of Ferranti). To meet the needs of smaller customers, smaller machines, the ICT 1901 and ICT 1902/3, were developed by the ICT Stevenage unit, based on the PF182 and PF183 processors already in development.
On 29 September 1964 the ICT 1900 range was announced in a filmed presentation, scripted by Antony Jay. The following week two working systems were demonstrated at the Business Equipment Exhibition, Olympia.
The first commercial sale was made in 1964 to the Morgan Crucible Company, comprising a 16K word 1902 with an 80-column 980-card/minute reader, a card punch, a 600 line/min printer and 4 x 20kchar/s tape drives. It was soon upgraded to a 32K word memory and a floating point unit to allow for some scientific work. The same company had also been the first to order ICT's first computer, the HEC4 (later ICT 1201), in 1955.
The first system delivered was a 1904, for the Northampton College of Advanced Technology, London in January 1965.
Architecture
The ICT 1900 was a word-addressing machine using a register-to-memory architecture with eight accumulator registers. Three of the accumulators could be used as modifier (index) registers. The word length was 24 bits, which could be used as four six-bit characters; instructions were provided for copying single characters to and from memory.
The accumulators were addressable as if they were the first eight words of memory, giving the effect of register-to-register instructions with no extra operation codes being needed. The hardware registers were an optional feature, and if not fitted the accumulators were the first eight words of memory. The large number of optional features in the FP6000 design gave ICT great flexibility in pricing.
A notable feature of the series was the hardware support for running multiple processes – every process ran in an independent address space, enforced by datum and limit registers. No user process could access the memory of any other process. Later models added paging hardware, allowing true virtual memory with the GEORGE 4 operating system.
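The effect of the datum and limit registers can be shown with a hypothetical sketch (in Python): every address issued by a program is relocated by the datum and rejected if it falls outside the program's allocation. The register values used are arbitrary illustrative numbers.

  # Sketch: datum/limit relocation and protection as described above.
  # Register values are arbitrary illustrative numbers.
  class MemoryProtectionError(Exception):
      pass

  def translate(program_address, datum, limit):
      """Relocate a program address to a physical one, enforcing the limit."""
      if program_address < 0 or program_address >= limit:
          raise MemoryProtectionError(f"address {program_address} outside allocation")
      return datum + program_address

  # A process given 4096 words starting at physical word 20480:
  print(translate(100, datum=20480, limit=4096))   # 20580
  try:
      translate(5000, datum=20480, limit=4096)
  except MemoryProtectionError as error:
      print("trapped:", error)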
On the original models the address size was 15 bits, allowing up to 32k words of memory. Later models added 22-bit addressing, allowing a theoretical 4Mword maximum memory. Instructions contained a 12-bit operand, either fixed or offset from an index register. Branch instructions held a 15-bit offset, allowing access to all memory on the initial range. When the address size was increased to 22 bits, replaced (indirect) and relative branches were added to the instruction set to allow access to the larger address space.
The largest change between the original FP6000 and the 1900 series was the inclusion of the ICT standard interface for connection of peripherals. This allowed connection of any ICT peripheral to any processor of the series, and owners could upgrade their processors while keeping the same peripherals or vice versa.
All I/O operations were initiated by a privileged supervisor process, known as the executive. User processes communicated with the executive using extracodes, instructions that caused a trap into the executive. The executive would then communicate with the appropriate peripheral via the Standard Interface, using functions not available to user processes. The subsequent data transfers would then occur across this interface, autonomously without further program involvement. The conclusion of the transfers (or error if any) would similarly be indicated back to the executive.
On smaller members of the series, some expensive instructions (floating point for example) were also implemented as extracodes. The combination of the executive and hardware provided the same interface to programs running on any model of the range.
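The extracode mechanism can be pictured with a hypothetical dispatch sketch: opcodes the hardware cannot execute trap into an executive routine that emulates them, so programs see the same instruction set on every model. The opcode names and handlers below are invented for illustration.

  # Sketch: extracode dispatch. Opcodes without hardware support trap to an
  # executive handler that emulates them in software. The opcode names and
  # the emulated operations are invented for illustration only.
  HARDWARE_OPCODES = {
      "ADD": lambda a, b: a + b,           # implemented directly in hardware
  }

  EXECUTIVE_EXTRACODES = {
      "FLOAT_MUL": lambda a, b: a * b,     # emulated by the executive
  }

  def execute(opcode, a, b):
      if opcode in HARDWARE_OPCODES:
          return HARDWARE_OPCODES[opcode](a, b)
      if opcode in EXECUTIVE_EXTRACODES:
          # On the real machine this step is a trap into the privileged executive.
          return EXECUTIVE_EXTRACODES[opcode](a, b)
      raise ValueError(f"illegal instruction: {opcode}")

  print(execute("ADD", 2, 3))            # handled by hardware
  print(execute("FLOAT_MUL", 2.5, 4))    # handled via extracode emulation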
The hardware floating-point unit, if fitted, ran autonomously. After a floating-point operation was started, integer instructions could be run in parallel until the result of the floating-point operation was needed.
Data formats
The instruction set supported the following data formats:
Character form
A 24-bit word could hold four six-bit characters.
Counter modifier, also known as an index word
A 9-bit counter and a 15-bit modifier (address) field. A loop instruction decremented the counter and incremented the address either by 1 or 2.
This format was only available in 15-bit addressing mode. In 22-bit mode the counter and address were kept in separate words.
Character counter modifier, also known as a character index word
Two-bit character offset, seven-bit counter and 15-bit modifier (word address). The BCHX (branch on character indexing) instruction decremented the counter and incremented the character offset, incrementing the word address if the character offset overflowed, branching if the count had not reached zero.
In 22-bit addressing mode the counter was unavailable, the format was a two-bit character offset and a 22-bit word address. The BCHX instruction incremented the character offset, incremented the word address if the character offset overflowed, and branched unconditionally.
Single-length integer
A 24-bit two's complement signed number.
Multi-length integer
The first word held a 24-bit two's complement signed number, subsequent words held 23-bit extensions with the high bit used for internal carry.
Single-length floating point number
Two words holding a 24-bit signed argument (mantissa) and a nine-bit exponent.
Double-length floating-point number
Two words holding a 38-bit signed argument and a nine-bit exponent.
Quadruple-length floating-point number
Four words holding a 75-bit signed argument and a nine-bit exponent.
Handled in software on all but 1906/7 processors with the extended floating-point feature.
Character set
Since the ICT 1900 used a six-bit character it was largely limited to a 64-character repertoire, with only upper case letters and no control characters.
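The packing of four six-bit characters into a single 24-bit word can be sketched as follows; the example codes are arbitrary six-bit values, not the actual 1900 character assignments.

  # Sketch: pack four 6-bit character codes into one 24-bit word and unpack
  # them again. The example codes are arbitrary stand-ins, not the real
  # 1900 character assignments.
  def pack_word(codes):
      assert len(codes) == 4 and all(0 <= c < 64 for c in codes)
      word = 0
      for code in codes:
          word = (word << 6) | code        # first character in the top six bits
      return word                          # always fits in 24 bits

  def unpack_word(word):
      return [(word >> shift) & 0o77 for shift in (18, 12, 6, 0)]

  word = pack_word([0o10, 0o05, 0o14, 0o14])     # four example 6-bit codes
  print(f"{word:08o}")                           # 10051414: two octal digits per character
  print([f"{c:02o}" for c in unpack_word(word)]) # ['10', '05', '14', '14']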
In order to deal with data on paper tape or from communications equipment, a system of shifts could be used to represent the full 128 characters of ASCII.
Character #74 (octal 74) was considered an alpha shift and indicated subsequent characters were to be considered upper case, #75 was a beta shift and indicated subsequent characters were in lower case, #76 the delta shift, indicating the next character was a control character, and #77 was used as a fill (ignore) character. For example, the ASCII string "Hello World" would be encoded as "αHβELLO αWβORLD".
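The shift convention can be modelled with a short sketch that reproduces the "Hello World" example above; only the alpha and beta shifts are handled, and the treatment of letter codes is a simplified assumption.

  # Sketch of the alpha/beta shift convention described above. Only letters
  # and spaces are handled; the delta (#76) and fill (#77) characters are
  # ignored, and the underlying six-bit letter codes are not modelled.
  ALPHA_SHIFT, BETA_SHIFT = 0o74, 0o75

  def encode_shifts(text):
      """Insert shift markers so that case survives a caseless character set."""
      output, mode = [], None
      for ch in text:
          if ch.isalpha():
              wanted = ALPHA_SHIFT if ch.isupper() else BETA_SHIFT
              if wanted != mode:
                  output.append("α" if wanted == ALPHA_SHIFT else "β")
                  mode = wanted
              output.append(ch.upper())    # the stored six-bit code is caseless
          else:
              output.append(ch)
      return "".join(output)

  print(encode_shifts("Hello World"))      # αHβELLO αWβORLD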
The 1900 used a variant of ASCII-63, known by ICT as the ECMA character set, with some characters in different positions:
Comparison with System/360
Both the 1900 series and IBM System/360 provided hardware support for multi-programming. On the 1900 all user memory addresses were modified by a datum (base address) register and checked against a limit register, preventing one program interfering with another. The System/360 gave each process and every 2048-byte block of memory a four-bit key, and if a process key did not match the memory block key an exception would result. The 1900 system required programs to occupy a contiguous area of memory but allowed processes to be relocated during execution, simplifying the work of the operating system. The 1900 also allowed any process direct access to the first 4096 words of its address space. (Both the 1900 and 360 had a 12-bit operand field, but on the 360 addresses were physical addresses so a program could directly access the first 4096 bytes of physical memory).
The System/360 had the advantage of a larger word and character size; its 32-bit words were large enough for (low accuracy) floating point numbers whereas the 1900 needed at least two words. The eight-bit byte of the System/360 allowed manipulation of lowercase characters without the complex shift sequences of the 1900. However, in the early days the smaller word size of the 1900 was seen as a cost advantage, as the memory could be 25% cheaper for the same number of words.
1900 range
Initial range
The initial range of machines was:
ICT 1901
A very small machine with a 6-bit wide mill (arithmetic unit). For compatibility with the other machines a 24-bit operation was performed by the processor as four 6-bit operations. Based on the PF183 developed by ICT Stevenage. The 1901 was announced and released after the other members of the initial range, in response to the IBM System/360 Model 20, and was a great success.
ICT 1902
A small machine. Based on the ICT Stevenage PF182 processor.
Like the 1901 the 1902 performed multiply and divide operations as extracodes. An optional commercial computing facility or CCF was available to add hardware multiply and divide. An optional floating point unit, the scientific computing facility, SCF was also available as a super-set of the CCF.
ICT 1903
The same processor as the 1902, but with 2µs core in place of the 6µs core supplied with the 1902.
ICT 1904
The ICT West Gorton processor derived from the FP6000 with the addition of the ICT standard interface.
ICT 1905
A 1904 with an autonomous hardware floating point unit.
ICT 1906
A new processor designed by ICT West Gorton with a 48-bit wide memory pathway and a 22-bit addressing mode. Delivered with up to 256Kwords of memory.
ICT 1907
A 1906 with a floating point unit.
ICT 1909
A machine similar to the 1905 but with a slow 6µs store comparable to the 1902. Designed for universities that needed floating point but found the 1905 too expensive.
The execution time for an addition instruction ("add the contents of store location x to register y") ranged from 2.5 μs for a 1906 or 1907 with 1.1 μs core store, to 34 μs for a 1901 with 6 μs core store.
All machines except the 1901 were operated from a modified Teletype Model 33 ASR used to give commands to the executive. The 1901 was operated from console switches, with a console available as an optional extra.
A range of peripherals was available, including 80-column card punches and readers, 8 track paper tape punches and readers and solid barrel line printers. Data could be stored on half-inch magnetic tape. Magnetic disk storage became available in 1966.
The 1900 E/F series
In 1968 ICT introduced the E series machines:
ICT 1904E
Some improvements were made to the original 1904 and the new 22-bit addressing mode developed for the 1906 was made available.
ICT 1905E
The 1904E with a floating point unit.
ICT 1906E
The original 1906 had not been as fast as hoped, therefore the new top of the range machines were actually dual-processor versions of the 1904E.
ICT 1907E
A 1906E with a special higher performance floating point unit.
Improvements to the memory subsystems of these machines, replacing the 1.8µs core with 0.75µs core, were introduced as the F series.
(ICT merged with English Electric Computers to form ICL on 9 July 1968. Thus although the E series had been designed by ICT many, if not all, were delivered with ICL badges).
1900 A series
In 1969 the 1900 A series was delivered, replacing the remaining machines from the initial series and the E/F machines. The original discrete germanium semiconductor implementations were replaced by Texas Instruments 7400 series TTL integrated circuits in most of the range and Motorola MECL 10K ECL integrated circuits in the new 1906A (which was based on the original 1906 rather than the dual processor 1904 of the 1906E/F). There was a proposal to build a multiprocessor version of the 1906A, the 1908A (known internally as Project 51), which would allow ICL to compete with the large CDC and IBM machines in Universities and research centers but it was eventually abandoned in favor of accelerating work on the New Range which was being designed to replace both the 1900 series and the ICL System 4.
With the A series a hardware floating point unit was made an optional feature of all machines, instead of having a different model number for floating point equipped machines.
The 22-bit addressing mode and extended branch mode introduced by the 1906 was extended to the 1902A and 1903A, but not the much smaller 1901A.
ICL introduced a paging unit to the higher end machines (1904A, 1906A) and a new version of the GEORGE operating system, GEORGE 4 which was compatible with GEORGE 3 but used paged virtual memory in place of the simple base/limit system of the earlier machines.
ICL 1901A
Deliveries started in 1969.
ICL 1902A
Deliveries started in 1969.
ICL 1903A
Deliveries started in 1969.
ICL 1904A
First deliveries in 1970.
The 1904A had an optional paging unit and so could run GEORGE 4.
ICL 1906A
First deliveries in 1970.
The 1906A had a paging unit and so could run GEORGE 4.
The 1900 S series
In April 1971 ICL announced the S series of machines, replacing the core store of the earlier machines with semiconductor memory in most of the range and very fast Plessey nickel plated wire memory for the top of the range 1906S.
ICL 1901S
4µs semiconductor store
ICL 1902S
3µs semiconductor store
ICL 1903S
1.5µs semiconductor store
ICL 1904S
First delivery in 1972. New Schottky STTL logic used, giving a 30% performance increase. 500ns semiconductor store. Used by Brian Wyvill of System Simulation for the computer animation in Alien.
ICL 1906S
First delivery in 1973. Nickel plated wire memory with a 250ns cycle speed.
1900 T series
As the larger models of the new range were being introduced it was decided that the lower models of the 1900 range were becoming uncompetitive. To refresh the range new models were released. In each case the model was simply based on the next higher model of the previous range, the 1903T being based on the 1904S for example.
ICL 1901T
Delivery started in 1974. The 1901T was based on the 1902S with an integrated disk controller and VDU controller added to the processor cabinet to reduce space.
ICL 1902T
Delivery started in 1974. The 1902T was based on the 1903S with an integrated disk controller and integrated VDU controller.
ICL 1903T
Delivery started in 1973. As the 1903T was based on the 1904S it was available with a paging unit and could run George 4. The processor clock and memory cycle time were slower than the 1904S, allowing the use of cheaper parts. The 1903T was built at the ICL West Gorton site.
1900-compatible machines
During and after the production of the 1900 series a number of compatible (or clone) machines were produced by ICL licensees, as well as competitors.
2903/2904
In 1969 IBM had introduced the System/3 entry-level machine, which began to cut into sales of the ICL 1901 and 1902 models. To recapture the market, an ICL project known internally as PF73 was started, based on an ICL Stevenage-developed microprogrammed machine known as MICOS-1, which came to market in 1973 as the ICL 2903 and 2904. Despite their New Range numbering, these machines used the ICL 1900 instruction set and ran 1900 software, although a microprogram was available that provided an IBM-360 instruction set to allow them to run IBM software. The 2903/2904 were released with an RPG compiler to better compete with System/3. It was a commercial success and almost 3000 machines were sold.
ME29
Based on a fully microprogrammed CPU, the Stanford EMMY commercialised by Palyn Associates, the ME29 was sold as a replacement for the 2903 and 2904, still executing the 1900 order code.
An EMMY processor emulating the IBM 360 order code was estimated to be around the speed of an IBM System/360 Model 50, implying that the ME29 was faster than the original ICT 1904, approaching the speed of the ICT 1906.
IBM 370/145
In an effort to increase sales to ICL customers, and to profit from the difficulties ICL were having moving customers from the 1900 to the New Range, IBM introduced a microcode package for the 370/145 allowing execution of 1900 series programs.
Odra 1300 series
The Odra 1300 series (Odra 1304, Odra 1305 and Odra 1325) were a range of 1900 compatible machines built by Elwro in Wrocław, Poland between 1971 and 1978. By agreement with ICL the Odra machines ran standard ICL software (executive E6RM, George 3).
ICL 2900 (New Range) systems
Second generation "S3E" (microcoded) versions of the larger New Range systems (such as the 2960/2966 from West Gorton, and the later 2940/50 from Stevenage), could run 1900 series code under DME (Direct Machine Environment) as an emulation as well as the New Range instruction set under the newer VME (Virtual Machine Environment). Later CME (Concurrent Machine Environment) microcode was developed, which allowed DME and VME to co-exist (and run) concurrently on the same platform, similar to the functionality offered by virtualisation software such as VMware today.
Operating systems
Executive
The FP6000 ran under the control of operators executive, a simple operating system that allowed the operator using the system console to load programs from magnetic tape, cards or paper tape, allocate peripherals to programs and attribute priorities to running programs. Executive performed all the I/O operations on behalf of user programs, allowing allocation of different peripherals as needed.
Despite its simplicity executive was, for the time, quite powerful, allocating memory to programs as needed (rather than the fixed partitions provided by OS/360). This was possible because the FP6000 design contained hardware to aid multi-programming, datum and limit registers which made programs address independent and avoided one program accessing the memory allocated to another.
To allow more efficient use of peripherals, as well running multiple programs simultaneously, executive allowed a limited multi-threading within programs (each program could be split into up to four sub-programs, sharing the same address space, which were also time shared. While one sub-program was waiting for peripheral activity another could continue processing).
An extended version of the FP6000 executive was provided with the ICT 1904/1905, and new versions were written for the ICT 1906/7 and ICT 1901/2/3. An important task of these different versions was to hide the hardware differences between the different machines, providing emulation of missing instructions as extracodes. The concept was that applications, and later operating systems, were written to run on the combination of the hardware and the executive, and so would run on any member of the series, no matter how different the underlying hardware was.
With the introduction of magnetic disk systems executive became more complex, using overlaying to reduce its memory footprint. Disk based executives included features to simplify disk operations, handling file management (creation, renaming, deletion, resizing) on behalf of user programs. Files were identified by 12 character names and a user program did not need to know which physical disk was being used for a file.
GEORGE
In December 1964 ICT set up an Operating Systems Branch to develop a new operating system for the 1906/7. The branch was initially staffed with people being released by the end of work on the OMP operating system for the Ferranti Orion. The initial design of the new system, named George in part after George E. Felton, head of the Basic Programming Division, was based on ideas from the Orion and the spooling system of the Atlas computer.
The initial versions, George 1 (for the ICT 1901, 1902 and 1903 machines) was a simple batch processing system. Job descriptions were read in from cards or paper tape, peripherals and magnetic tape files were dynamically allocated to the job which was then run, producing output on the line printer.
George 2 added the concept of spooling. Jobs and input data were read in from cards or paper tape to an input well on disk or tape. The jobs were then run, writing output to disk or tape spool files, which were then written to the output peripherals. The input/processing/output stages were run in parallel, increasing machine utilisation. On larger machines it was possible to run multiple jobs simultaneously.
George 1 and 2 ran as simple programs under executive (with trusted status that allowed them to control user programs). George 3 was a complete operating system in itself, it used a much reduced executive responsible only for handling low level hardware access. George 3 implemented both batch processing and Multiple online programming (MOP) – interactive use from terminals.
George 4 was introduced with the availability of paging hardware on the later machines and implemented paged virtual memory instead of the simple swapping used by George 3.
Minimop and Maximop
Programming languages
ICT initially provided the PLAN assembly language and later the "big three" high-level languages: ALGOL 60, COBOL and FORTRAN 66.
The compilers were released in various versions, of increasing sophistication. Initially paper tape and cards were used for input and output; later magnetic tape and finally disk files. The first versions of the compilers ran in very limited space, starting around 4K words for PLAN and NICOL and as little as 16K words for FORTRAN and ALGOL. Later versions for the George 3 and 4 operating systems expanded to sizes as large as 48K words.
Other languages available included:
PLASYD – an alternative assembly language modeled on PL/360, much used by the Atlas Computer Laboratory.
NICOL – the NIneteen Hundred COmmercial Language. A simple report generation language in the RPG vein, much used on the small 1901 replacing card tabulator systems.
JEAN – a dialect of JOSS, a conversational language similar in capabilities to BASIC.
SOBS – the Southampton BASIC System.
POP-2 – from the University of Edinburgh, a stack-based list-processing language.
ALGOL 68R – the Royal Radar Establishment wrote one of the first Algol 68 compilers for the 1900.
Pascal – Queen's University Belfast initially ported the CDC Pascal compiler to the 1900, then wrote a completely new and well-engineered replacement.
FORTRAN 77 – the University of Salford produced a FORTRAN 77 compiler for George 3. It was unusual in that it used 8-bit characters and the ASCII character set internally. Silverfrost FTN95, a Fortran 95 compiler for Windows is a distant descendant.
BCPL – Bernard Sufrin ported Martin Richards's IBM 360 compiler to the 1900 architecture in mid-1969 at Essex University. BCPL is an antecedent of C.
Applications software
Like many contemporary machines much application software was bundled with the basic system, including the compilers and utility programs. Other software was available as paid options from ICT or other sources, including such exotic packages as Storm Sewer Design and Analysis.
SCAN – Stock control system (Acronym: Stock Control and Analysis on Nineteen-hundred)
PERT – Project management system (Acronym: Project Evaluation and Review Technique)
PROSPER – Financial planning system (not the forerunner of today's spreadsheet programs, which originated with accountants' analysis ledgers more than one hundred years ago). The PROSPER (Profit Simulation, Planning and Evaluation of Risk) package extended the earlier work contained in PROP (Profit Rating of Projects).
NIMMS – Production control system (Acronym: Nineteen-hundred Integrated Modular Management System)
PROMPT – Production control system (Acronym: Production Reviewing Organising and Monitoring of Performance Techniques)
COMPAY – Company payroll program
DATADRIVE and DATAVIEW – Online data entry and enquiry system, capable of driving a large number of terminals
FIND – File Interrogation of Nineteenhundred Data (data analysis package)
Filetab – A tool for generating reports based on decision tables. Filetab was marketed by the National Computing Centre (NCC), set up by the British Government in Manchester. Initially, it was a very flexible, parameter-driven report generator with later versions allowing extensive file handling capabilities. The product was first known as NITA (Nineteen Hundred Tabulator) and later became known as TABN (Tabulator Nineteen Hundred). It would run on the ICL 1900 Series of machines, and later on both the 2900 Series and 3900 Series computers. TABN statements were either interpreted from punched cards at run-time, or they could be compiled to produce a program that could simply be executed. One of the attractions of writing programs in Filetab was its short development time.
References
Notes
Citations
External links
Guide to running George 3 on a raspberry pi at rs-online.com
1900
Computer-related introductions in 1964
Early British computers
24-bit computers |
2030760 | https://en.wikipedia.org/wiki/Homebuilt%20computer | Homebuilt computer | A custom-built or homebuilt computer is a computer assembled from available components, usually commercial off-the-shelf (COTS) components, rather than purchased as a complete system from a computer system supplier, also known as pre-built systems.
A custom-built or homebuilt computer is usually cheaper than a comparable pre-built computer, since the purchase price excludes the labour of assembly; that labour is instead supplied by the end user assembling their own machine.
Homebuilt computers are almost always used at home, like home computers, but home computers were traditionally purchased already assembled by the manufacturer. However, there were kits that were both home computers and homebuilt computers, like the Newbear 77-68, which the owner was expected to assemble and use in his or her home.
History
Computers have been built at home for a long time, starting with the Victorian era pioneer Charles Babbage in the 1820s. A century later, Konrad Zuse built his own machine when electromechanical relay technology was widely available. In 1965, electronics engineer James Sutherland started building a computer out of surplus parts from his job at Westinghouse. The hobby really took off with the early development of microprocessors, and since then many enthusiasts have constructed their own computers.
Early examples include the Altair 8800 from the United States and the later British Newbear 77-68 and Nascom designs from the late 1970s and early 1980s. Some were made from kits of components, or simply distributed as board designs like the Ferguson Big Board. The Altair 8800 pioneered the S-100 bus, which somewhat simplified the process. Ultimately, the development of home computers, the IBM PC (and its derivatives and clones), and the industry of specialized component suppliers that grew up around this market in the mid-1980s made building computers much easier, and computer building is no longer limited to specialists. Computers based on the Apple Macintosh and Amiga platforms generally cannot be built legally by users because of patents and licenses covering their hardware, firmware, and software.
Development as a hobby
At one time building desktop PCs was a popular hobby. Someone building their own computer could not only outperform pre-built models sold in retail stores, but could also add whatever components they wanted: multiple hard drives, case mods, high-performance graphics cards, liquid cooling, multi-head high-resolution monitor configurations, or alternative operating systems that avoid the "Microsoft tax". As pre-built computers improved in quality and performance, and manufacturers offered more options, it became less cost-effective for most users to build their own computers, and the hobby declined. The growing popularity of laptops and tablets led to a mobile-first design methodology that is difficult for home builders to duplicate economically. More recently, PC parts have become cheaper and people have started building computers again. With the rise of virtual reality (VR) headsets such as the HTC Vive, demand for high performance has risen, and competitive games with their own dedicated tournaments have attracted more builders, since a custom build allows more effective tuning for performance.
Standardization
Practically all PCs and some laptops are built from readily interchangeable standard parts. Even in the more specialized laptop market, a considerable degree of standardization exists in the basic design, although it may not be easily accessible to end users. Although motherboards are specialized to work with either Intel or AMD processors, other parts such as graphics cards, RAM and computer cases are standardized to fit most setups. The availability of standard PC components has led to the development of small-scale custom PC assembly. So-called white box PC manufacturers and commercial "build to order" services range in size from small local supply operations to large international operations.
Kits and barebones systems
Computer kits include all of the hardware (and sometimes the operating system software, as well) needed to build a complete computer. Because the components are pre-selected by the vendor, the planning and design stages of the computer-building project are eliminated, and the builder's experience will consist solely of assembling the computer and installing the operating system. The kit supplier should also have tested the components to assure that they are compatible.
A barebones computer is a variation on the kit concept. A barebones system typically consists of a computer case with a power supply, motherboard, processor, and processor cooler. A wide variety of other combinations are also possible: some barebones systems come with just the case and the motherboard, while other systems are virtually complete. In either case, the purchaser will need to obtain and install whatever parts are not included in the barebones kit (typically the hard drive, Random Access Memory, peripheral devices, and operating system).
Like mass-produced computers, barebones systems and computer kits are often targeted to particular types of users, and even different age groups. Because many home computer builders are gamers, for example, and because gamers are often young people, barebones computers marketed as "gaming systems" often include features such as neon lights and brightly coloured cases, as well as features more directly related to performance such as a fast processor, a generous amount of RAM, and a powerful video card. Other kits and barebones systems may be specifically marketed to users of a free software operating system such as Linux or one of the BSD variants, with components guaranteed for compatibility and performance with that operating system.
Scavenged and "cannibalized" systems
Many amateur-built computers are built primarily from used or "spare" parts. It is sometimes necessary to build a computer that will run an obsolete operating system or proprietary software for which updates are no longer available, and which will not run properly on a current platform. Economic reasons may also require an individual to build a new computer from used parts, especially among young people or in developing countries where the cost of new equipment places it out of reach of average people.
Advantages and disadvantages
Building one's own computer affords tangible benefits compared to purchasing a mass-produced model, such as:
To make a computer customized to fit the user's needs in regard to quality, price, and availability.
To recycle an older computer, or to upgrade internal components such as the motherboard, CPU, video card, etc.
To build a high end computer using only top-quality parts for gaming, multimedia, or other demanding tasks.
To avoid trial software and other commission-driven additions that are made to mass-market computers prior to their being shipped.
To ensure the use of industry-standard parts for operating system compatibility or to upgrade the original build at a later date with little hassle.
To ensure that one has all the individual driver and OS discs - many manufactured computers only come with one or two discs, one containing the OS, and another containing the drivers required, plus all the shovelware that was initially installed.
Enjoyment, personal satisfaction, and educational experience.
The ability to use higher-quality parts, since OEMs tend to use cheaper, lower-quality components.
In most cases, a self-built computer is much cheaper than a pre-built one with comparable specifications.
There are drawbacks to building one's own PC:
A poorly designed system may have flaws that a manufacturer's testing would have exposed. For example, a case chosen on the basis of looks may have poor ventilation if the CPU is overclocked.
Someone assembling a PC must learn how all the components work and interact; airflow, compatibility of each component with the others, space constraints inside the computer case, and PCIe lanes and slots are some of the major points to understand before building a computer (a conceptual compatibility-check sketch follows this list). Studying a guide to building and buying computer components is advised.
The lack of technical support and warranty protection beyond what may be provided by the individual component and software vendors. However, a person assembling a PC likely has the expertise to maintain the system and requires little assistance from manufacturers.
The difficulty of finding certain components and of knowing whether parts are compatible without prior knowledge of PC hardware.
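As a rough illustration of the kind of compatibility checking described in the list above, the following Python sketch compares a few hypothetical component attributes (socket, memory type, power draw). The part names, fields and power-supply headroom rule are invented for the example and do not model any real product database.

# Conceptual sketch: check a hypothetical parts list for basic compatibility.
# All component data and the 1.5x power-supply headroom rule are invented
# for illustration; real builds need far more detailed checks.

build = {
    "cpu":   {"name": "ExampleCPU",   "socket": "AM5",  "watts": 105},
    "board": {"name": "ExampleBoard", "socket": "AM5",  "ram_type": "DDR5"},
    "ram":   {"name": "ExampleRAM",   "ram_type": "DDR5", "watts": 10},
    "gpu":   {"name": "ExampleGPU",   "watts": 220},
    "psu":   {"name": "ExamplePSU",   "watts": 650},
}

def check(build):
    problems = []
    if build["cpu"]["socket"] != build["board"]["socket"]:
        problems.append("CPU socket does not match motherboard socket")
    if build["ram"]["ram_type"] != build["board"]["ram_type"]:
        problems.append("RAM type does not match motherboard")
    # Total draw of everything except the PSU itself
    draw = sum(p.get("watts", 0) for p in build.values() if p is not build["psu"])
    if build["psu"]["watts"] < 1.5 * draw:   # arbitrary headroom factor
        problems.append(f"PSU may be too small for ~{draw} W of components")
    return problems or ["No obvious compatibility problems found"]

for line in check(build):
    print(line)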
Custom-built computers and alternative operating systems
Because almost all mass-manufactured PCs ship with some version of Microsoft Windows pre-installed, individuals who wish to use operating systems other than Windows (for example, Linux or BSD) often choose to build their own computers. Their reason for doing so is not always related to saving money on an operating system.
Because Microsoft Windows is the de facto standard operating system for PCs, hardware device drivers of varying quality can readily be found that will enable virtually any component designed for the PC architecture to function on a Windows platform. However, the same is not true for alternative operating systems like Linux and BSD, so users of these systems have to be careful to avoid hardware that is incompatible with their choice of operating system. Even among hardware devices that technically will "work" with these alternative operating systems, some will work better than others. Therefore, many users of non-Microsoft operating systems choose to build their own computers from components known to work particularly well with their preferred platforms.
A less common but still relevant option is to build a so-called Hackintosh system, in which the builder chooses components specifically with macOS in mind. This can be a tedious process, as Apple maintains strict standards for which hardware works with its software, so following previously documented working builds is important for success. It is not a generally recommended route, but that has not stopped curious enthusiasts from succeeding.
Custom-built computers and high-performance systems
Most mainstream manufactured computers use common or inexpensive parts such as onboard graphics and audio. While integrated accessories offer dramatic economic savings (and satisfy many users), these options generally do not perform as well as dedicated hardware under high demand situations such as current games, CAD and media production.
Homebuilt computers are most common among gamers, engineers, or other people who demand more performance from a specific component than the average user. An example would be a gamer using a slightly behind-the-curve CPU and disk drive, spending the difference on a more capable dedicated graphics card.
Additionally, those with more specific computer needs usually appreciate being able to upgrade certain components to fit their needs and the evolving needs of the software being used; in a typical manufactured PC the support components (such as power supply unit, motherboard, or even the chassis) are unfit for accepting high-performance add-in components. Constructing a system with future expansion in mind allows for such upgrades, which in turn are much cheaper than buying a brand new computer every time individual components become obsolete or insufficient to meet the needs of the user.
High-end PCs are most often used for processor- and/or memory-intensive applications such as multimedia work, home theater, music production, engineering, and many more. Generally, a high-end system is capable of meeting the demands of gaming and can be used as such; a major difference between a high-end PC and a gaming PC is likely to be only the choice of video card, since they share a majority of other components. While a general-purpose high-end computer may be put to use in a render farm or as a file server, and be provisioned with components targeted at this use (such as a fast GPU for rendering or high-performance storage for serving files), most gaming takes place in real time, so in a gaming PC all the components matter in creating a smooth and seamless experience. A less intensive type of build satisfies or exceeds the needs of most computer users.
See also
White box (computer hardware)
Hackintosh
Barebone computer
Enthusiast computing
References
External links
Personal computers
DIY culture |
443015 | https://en.wikipedia.org/wiki/Hardware%20abstraction | Hardware abstraction | Hardware abstractions are sets of routines in software that provide programs with access to hardware resources through programming interfaces. The programming interface allows all devices in a particular class C of hardware devices to be accessed through identical interfaces even though C may contain different subclasses of devices that each provide a different hardware interface.
Hardware abstractions often allow programmers to write device-independent, high performance applications by providing standard operating system (OS) calls to hardware. The process of abstracting pieces of hardware is often done from the perspective of a CPU. Each type of CPU has a specific instruction set architecture or ISA. The ISA represents the primitive operations of the machine that are available for use by assembly programmers and compiler writers. One of the main functions of a compiler is to allow a programmer to write an algorithm in a high-level language without having to care about CPU-specific instructions. Then it is the job of the compiler to generate a CPU-specific executable. The same type of abstraction is made in operating systems, but OS APIs now represent the primitive operations of the machine, rather than an ISA. This allows a programmer to use OS-level operations (e.g. task creation/deletion) in their programs while retaining portability over a variety of different platforms.
Overview
Many early computer systems did not have any form of hardware abstraction. This meant that anyone writing a program for such a system would have to know how each hardware device communicated with the rest of the system. This was a significant challenge to software developers, since they had to know how every hardware device in a system worked to ensure the software's compatibility. With hardware abstraction, rather than communicating directly with a hardware device, the program tells the operating system what the device should do, and the operating system generates a hardware-dependent instruction for the device. This means programmers do not need to know how specific devices work, which makes their programs compatible with a wide range of devices.
An example of this might be a "Joystick" abstraction. The joystick device, of which there are many physical implementations, is readable and writable through an API which many joystick-like devices might share. Most joystick devices might report movement directions. Many joystick devices might have sensitivity settings that can be configured by an outside application. A joystick abstraction hides details of the hardware (e.g., register format, I2C address) so that a programmer using the abstracted API does not need to understand the details of the device's physical interface. This also allows code reuse, since the same code can process standardized messages from any kind of implementation which supplies the "joystick" abstraction. A "nudge forward" can come from a potentiometer or from a capacitive touch sensor that recognises "swipe" gestures, as long as they both provide a signal related to "movement".
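The joystick example can be sketched in code. The following Python sketch is illustrative only: the class names, the movement() method and the two hypothetical device back-ends are invented to show how one abstract interface can hide different physical implementations.

# Illustrative sketch of a "Joystick" abstraction: application code talks to
# one interface while different hardware back-ends supply the signal.
# Class and method names are hypothetical, not a real driver API.

from abc import ABC, abstractmethod

class Joystick(ABC):
    @abstractmethod
    def movement(self) -> tuple[float, float]:
        """Return normalised (x, y) movement in the range -1.0 .. 1.0."""

class PotentiometerJoystick(Joystick):
    def __init__(self, read_adc):
        self._read_adc = read_adc          # function returning raw 0..1023 values

    def movement(self):
        raw_x, raw_y = self._read_adc()
        return (raw_x - 512) / 512, (raw_y - 512) / 512

class TouchSwipeJoystick(Joystick):
    def __init__(self, last_swipe):
        self._last_swipe = last_swipe      # function returning a swipe vector

    def movement(self):
        return self._last_swipe()          # already normalised by the sensor

def nudge_forward(stick: Joystick) -> bool:
    """Application code: works with any Joystick implementation."""
    _, y = stick.movement()
    return y > 0.5

# Either back-end satisfies the same abstraction:
print(nudge_forward(PotentiometerJoystick(lambda: (512, 1000))))   # True
print(nudge_forward(TouchSwipeJoystick(lambda: (0.0, 0.8))))       # True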
As physical limitations (e.g. resolution of sensor, temporal update frequency) may vary with hardware, an API can do little to hide that, other than by assuming a "least common denominator" model. Thus, certain deep architectural decisions from the implementation may become relevant to users of a particular instantiation of an abstraction.
A good metaphor is the abstraction of transportation. Both bicycling and driving a car are transportation. They both have commonalities (e.g., you must steer) and physical differences (e.g., use of feet). One can always specify the abstraction "drive to" and let the implementor decide whether bicycling or driving a car is best. The "wheeled terrestrial transport" function is abstracted and the details of "how to drive" are encapsulated.
Examples of "abstractions" on a PC include video input, printers, audio input and output, block devices (e.g. hard disk drives or USB flash drive), etc.
In certain computer science domains, such as operating systems or embedded systems, the abstractions have slightly different appearances (for instance, Operating Systems tend to have more standardized interfaces), but the concept of abstraction and encapsulation of complexity are common, and deep.
The hardware abstraction layer resides below the application programming interface (API) in a software stack, whereas the application layer (often written in a high-level language) resides above the API and communicates with the hardware by calling functions in the API.
In operating systems
A hardware abstraction layer (HAL) is an abstraction layer, implemented in software, between the physical hardware of a computer and the software that runs on that computer. Its function is to hide differences in hardware from most of the operating system kernel, so that most of the kernel-mode code does not need to be changed to run on systems with different hardware. On Microsoft Windows, HAL can basically be considered to be the driver for the motherboard and allows instructions from higher level computer languages to communicate with lower level components, but prevents direct access to the hardware.
CP/M (CP/M BIOS), DOS (DOS BIOS), Solaris, Linux, BSD, macOS, and some other portable operating systems also have a HAL, even if it is not explicitly designated as such. Some operating systems, such as Linux, have the ability to insert one while running, like Adeos. The NetBSD operating system is widely known as having a clean hardware abstraction layer which allows it to be highly portable. As part of this system are machine-independent subsystems such as the bus_space(9) and bus_dma(9) frameworks. Popular buses which are used on more than one architecture are also abstracted, such as ISA, EISA, PCI, PCIe, etc., allowing drivers to also be highly portable with a minimum of code modification.
Operating systems having a defined HAL are more easily portable across different hardware. This is especially important for embedded systems that run on dozens of different platforms.
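As a language-neutral illustration of the idea (real HALs are part of the kernel and written in C or assembly), the following Python sketch shows hardware-independent code calling a small, hypothetical per-platform HAL; the platform names and the LED and tick operations are invented for the example.

# Conceptual sketch of a HAL: the "kernel" code below never changes, only the
# platform-specific table of operations does. Platform names and operations
# are hypothetical; real HALs live in the kernel and are written in C/assembly.

HAL_IMPLEMENTATIONS = {
    "board_a": {
        "set_led":   lambda on: print(f"[board_a] MMIO write LED <- {int(on)}"),
        "get_ticks": lambda: 1000,     # pretend timer register
    },
    "board_b": {
        "set_led":   lambda on: print(f"[board_b] I2C command LED <- {int(on)}"),
        "get_ticks": lambda: 2500,
    },
}

def boot(platform: str) -> None:
    """Hardware-independent code: identical for every supported platform."""
    hal = HAL_IMPLEMENTATIONS[platform]
    hal["set_led"](True)
    print(f"booted at tick {hal['get_ticks']()}")

boot("board_a")
boot("board_b")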
Microsoft Windows
The Windows NT kernel has a HAL in the kernel space between the hardware and the executive services contained in the file NTOSKRNL.EXE; the HAL itself resides in %WINDOWS%\system32\hal.dll. This allows portability of the Windows NT kernel-mode code to a variety of processors, with different memory management unit architectures, and to a variety of systems with different I/O bus architectures; most of that code runs without change on those systems when compiled for the instruction set applicable to those systems. For example, the SGI Intel x86-based workstations were not IBM PC compatible workstations, but due to the HAL, Windows 2000 was able to run on them.
Since Windows Vista and Windows Server 2008, the HAL used is automatically determined during startup.
AS/400
An "extreme" example of a HAL can be found in the System/38 and AS/400 architectures, currently implemented in the IBM i operating system. Most compilers for those systems generate an abstract machine code; the Licensed Internal Code, or LIC, translates this virtual machine code into native code for the processor on which it is running and executes the resulting native code. (The exceptions are compilers that generate the LIC itself; those compilers are not available outside IBM.) This was so successful that application software and operating system software above the LIC layer that were compiled on the original S/38 run without modification and without recompilation on the latest AS/400 systems, despite the fact that the underlying hardware has been changed dramatically; at least three different types of processors have been in use.
Android
Android introduced a HAL known as the "vendor interface" (codenamed "Project Treble") in version 8.0 "Oreo". It separates the vendor's low-level code from the Android OS framework, and vendor implementations must be made forward compatible to support future versions of Android, which eases the development of firmware updates. Before Project Treble, Android relied on various non-standardized legacy HALs.
See also
Basic Input/Output System (BIOS)
Unified Extensible Firmware Interface (UEFI)
Firmware
Advanced Configuration and Power Interface (ACPI)
Device tree
Board support package (BSP)
DeviceKit
Haiku Device Kit
HAL (software)
Hardware-dependent software (HDS)
Nanokernel
Picokernel
Protection ring
References
Further reading
Operating system technology
Firmware |
5277267 | https://en.wikipedia.org/wiki/Architectural%20pattern | Architectural pattern | An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. The architectural patterns address various issues in software engineering, such as computer hardware performance limitations, high availability and minimization of a business risk. Some architectural patterns have been implemented within software frameworks.
The use of the word "pattern" in the software industry was influenced by similar concepts as expressed in traditional architecture, such as Christopher Alexander's A Pattern Language (1977) which discussed the practice in terms of establishing a pattern lexicon, prompting the practitioners of computer science to contemplate their own design lexicon.
Usage of this metaphor within the software engineering profession became commonplace after the publication of Design Patterns (1994) by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides—now commonly known as the "Gang of Four"—coincident with the early years of the public Internet, marking the onset of complex software systems "eating the world" and the corresponding need to codify the rapidly sprawling world of software development at the deepest possible level, while remaining flexible and adaptive.
Architectural patterns are similar to software design patterns but have a broader scope.
Definition
Even though an architectural pattern conveys an image of a system, it is not an architecture. An architectural pattern is a concept that solves and delineates some essential cohesive elements of a software architecture. Countless different architectures may implement the same pattern and share the related characteristics. Patterns are often defined as "strictly described and commonly available".
Architectural style
Following traditional building architecture, a 'software architectural style' is a specific method of construction, characterized by the features that make it notable.
Some treat architectural patterns and architectural styles as the same; some treat styles as specializations of patterns. What they have in common is that both patterns and styles are idioms for architects to use: they "provide a common language" or "vocabulary" with which to describe classes of systems.
The main difference is that a pattern can be seen as a solution to a problem, while a style is more general and does not require a problem to solve for its appearance.
Examples
Here is a list of architecture patterns, and corresponding software design patterns and solution patterns.
Some additional examples of architectural patterns (a minimal sketch of one of them, pipe and filter, follows the list):
Blackboard system
Broker pattern
Event-driven architecture
Implicit invocation
Layers
Hexagonal architecture
Microservices
Action–domain–responder
Model–view–controller
Presentation–abstraction–control
Model–view–presenter
Model–view–viewmodel
Entity component system
Entity-control-boundary
Multitier architecture (often three-tier or n-tier)
Object-oriented programming
Naked objects
Operational data store (ODS)
Peer-to-peer
Pipe and filter architecture
Service-oriented architecture
Space-based architecture
Distributed hash table
Publish–subscribe pattern
Message broker
Hierarchical model–view–controller
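As a minimal illustration of one pattern from the list above, pipe and filter, the following Python sketch chains independent filters over a data stream. The filter functions and sample data are invented for the example and are not tied to any particular framework.

# Minimal pipe-and-filter sketch: each filter is an independent stage that
# consumes an iterable and yields transformed items; the "pipe" is simply the
# composition of stages. Sample data and filters are illustrative only.

def strip_blank(lines):
    for line in lines:
        if line.strip():
            yield line.strip()

def to_upper(lines):
    for line in lines:
        yield line.upper()

def number(lines):
    for i, line in enumerate(lines, start=1):
        yield f"{i}: {line}"

def pipeline(source, *filters):
    stream = source
    for f in filters:          # connect the stages in order
        stream = f(stream)
    return stream

raw = ["hello", "", "  pipe and filter  ", "architecture"]
for out in pipeline(raw, strip_blank, to_upper, number):
    print(out)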
See also
List of software architecture styles and patterns
Process Driven Messaging Service
Enterprise architecture
Common layers in an information system logical architecture
References
Bibliography
Software design patterns |
22227464 | https://en.wikipedia.org/wiki/James%20Andrew%20Lewis | James Andrew Lewis | James Andrew Lewis is a Senior Vice President and the Director of the Technology and Public Policy Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C.
Life
Before joining CSIS, he was a member of the U.S. Foreign Service and Senior Executive Service, where he worked on regional security, military intervention and insurgency, conventional arms negotiations, technology transfer (including global arms sales), encryption, internet security, space remote sensing, high-tech trade with China, sanctions and Internet policy.
His diplomatic experience included negotiations on military basing in Asia, the Cambodia peace process, and the Five-power talks on arms transfer restraint. Lewis led the U.S. delegation to the Wassenaar Arrangement Experts Group for advanced civil and military technologies. He was also assigned to the U.S. Southern Command for Operation Just Cause, to the U.S. Central Command for Operation Desert Shield, to the National Security Council, and to the U.S. Central American Task Force. At Commerce, he was responsible for policy and regulation affecting satellites, high-performance computers, and encryption. He was the Department lead for the Select Committee on U.S. National Security and Military/Commercial Concerns with the People's Republic of China. Lewis served as Rapporteur for the 2010, 2013, and 2015 UN Groups of Governmental Experts on Information Security.
Lewis has authored more than two hundred publications since coming to CSIS, on cybersecurity, innovation, military space, and identity management. He was the Project Director for CSIS's Commission on Cybersecurity for the 44th Presidency and led a long-running Track 1.5 dialogue on cybersecurity with the China Institute of Contemporary International Relations (CICIR). He has testified numerous times before Congress. Lewis earned a Ph.D. from the University of Chicago.
Works
References
External links
1953 births
Living people
University of Chicago alumni |
6264581 | https://en.wikipedia.org/wiki/Functional%20testing | Functional testing | Functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered (unlike white-box testing). Functional testing is conducted to evaluate the compliance of a system or component with specified functional requirements. Functional testing usually describes what the system does.
Since functional testing is a type of black-box testing, the software's functionality can be tested without knowing the internal workings of the software. This means that testers do not need to know programming languages or how the software has been implemented. This, in turn, could lead to reduced developer bias (or confirmation bias) in testing since the tester has not been involved in the software's development.
Functional testing does not imply testing a function (method) of a module or class; rather, it tests a slice of functionality of the whole system.
Functional testing differs from system testing in that functional testing "verifies a program by checking it against ... design document(s) or specification(s)", while system testing "validate[s] a program by checking it against the published user or system requirements."
Types
Functional testing has many types:
Smoke testing
Sanity testing
Regression testing
Usability testing
Six Steps
Functional testing typically involves the following six steps (illustrated in the sketch after this list):
The identification of functions that the software is expected to perform
The creation of input data based on the function's specifications
The determination of output based on the function's specifications
The execution of the test case
The comparison of actual and expected outputs
The verification of whether the application works as the customer requires.
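The following Python sketch, using the standard unittest module, illustrates the steps above for a hypothetical discount_price function: the input data and expected outputs come from an assumed specification, and only the observable output is checked, never the implementation.

# Illustrative black-box functional test: inputs and expected outputs are
# derived from a (hypothetical) specification of discount_price; the test
# never inspects how the function is implemented.

import unittest

def discount_price(price: float, customer_type: str) -> float:
    """Component under test (stand-in implementation for the example)."""
    rates = {"regular": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates[customer_type]), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_specified_cases(self):
        # (input price, customer type) -> expected output per the specification
        cases = [
            ((100.0, "regular"), 100.0),
            ((100.0, "member"),   90.0),
            ((100.0, "vip"),      80.0),
        ]
        for (price, kind), expected in cases:
            with self.subTest(kind=kind):
                # compare actual output against the expected output
                self.assertEqual(discount_price(price, kind), expected)

if __name__ == "__main__":
    unittest.main()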
See also
References
Software testing |
346610 | https://en.wikipedia.org/wiki/Chown | Chown | The command , an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of file system files, directories. Unprivileged (regular) users who wish to change the group membership of a file that they own may use .
The ownership of any file in the system may only be altered by a super-user. A user cannot give away ownership of a file, even when the user owns it. Similarly, only a member of a group can change a file's group ID to that group.
The chown command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. It has also been ported to the IBM i operating system.
Syntax
chown name_of_new_owner file_name
chown newuser:newgroup file_name
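The same ownership change can also be made programmatically; for example, Python's standard library exposes the underlying operation. In the sketch below the file name, user and group names are placeholders, and the script must run with sufficient privileges (typically as the superuser) for the change to succeed.

# Change the owner and group of a file from Python. The path, user and group
# names are placeholders; changing ownership normally requires superuser
# privileges, just as with the chown command itself.

import shutil

shutil.chown("file_name", user="newuser", group="newgroup")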
See also
chgrp
chmod
takeown
References
External links
chown manual page
The chown Command by The Linux Information Project (LINFO)
Operating system security
Standard Unix programs
Unix SUS2008 utilities |
59439318 | https://en.wikipedia.org/wiki/George%20Pusenkoff | George Pusenkoff | George Pusenkoff (; born 1953 in Krasnapolle), Belarus, is a German-Belarusian painter, installation artist and photographer. He is a representative of postmodernism.
Biography
George Pusenkoff studied computer science from 1971 to 1976 at the National Research University of Electronic Technology in Moscow. From 1977 to 1983 he studied art (graphics and painting) at the then Moscow Polygraphic Institute, today the Moscow State University of Printing Arts. Since 1984 he has participated in exhibition projects in Moscow, across the USSR and abroad. During his time in the USSR, Pusenkoff was one of the Russian Nonconformists. In 1987 he joined the artists' association Ermitage and in 1988 he became a member of the Moscow Gruppe 88. He has also been a member of the MOSKh (Moscow Union of Artists) since 1987. On the invitation of the gallery owner Hans Mayer, Pusenkoff moved to Germany in 1990 and has lived and worked in Cologne ever since. Pusenkoff is Jewish. In 2008 he was nominated for the Kandinsky Prize.
Artistic work
In his works, George Pusenkoff often refers to art-historically significant events of the 20th century. At the beginning of his artistic career Pusenkoff was close to appropriation art; since the 2000s he has increasingly turned to abstract art. His paintings are now dominated by color, line, and surface. The works in which he quotes from art history appear very catchy and "familiar", because the viewer already knows them in other contexts, such as the famous painting The Black Square by Kasimir Malevich. Pusenkoff also quotes or modifies, for example, works by Josef Albers (Homage to Albers, 1998), Robert Rauschenberg (Erased Rauschenberg, 1997), Piet Mondrian (Mondrian 2, 1999) and other artists of historical significance. These works of Pusenkoff are to be understood and perceived as a statement about the present; cultural criticism takes place immanently in the image. For Pusenkoff, good art is always also an artistic examination of the Zeitgeist. For him this means engaging on an artistic level with the upheavals of our time brought about by the emergence of the computer: "Pusenkoff is a conceptual painter in the sense that he doesn't work spontaneously and intuitively, but that a reflection on questions of image formation, perception, the original and painting in the media age is the basis of his art." (in German: "Pusenkoff ist ein konzeptueller Maler in dem Sinne, daß [!] er nicht spontan und intuitiv ans Werk geht, sondern ein Nachdenken über Fragen der Bildentstehung, der Wahrnehmung, des Originals und der Malerei im Medienzeitalter Grundlage seiner Kunst ist.")
In 1993 George Pusenkoff had the opportunity of a solo exhibition in a room of the Tretyakov Gallery. The room, which was not originally designed as an exhibition space, was dominated by a 42-meter-long front window, which directed the viewer's gaze outwards and not to the exhibited works of art inside. Pusenkoff developed an installation for this room in which the front window was covered by a wooden wall. The works of art were then installed on this wall: "To block off an enormous windowfront, I built a wall measuring six meters high and 42 meters long. This surface was covered by 24 paintings, each two by two meters, as well as 600 smaller copies of these arranged to a particular pattern. The whole resembled an endlessly unfolding molecular matrix. Of major importance was the contrast between the space of the room and that of the window-wall."
Pusenkoff's painting said Duchamp, in which he integrated an image of the Mona Lisa into his work for the first time, was created especially for this exhibition. It shows a smiling Frank Sinatra as a reminiscence of the readymades of Marcel Duchamp and of the Mona Lisa. George Pusenkoff explains: "Actually, the picture was thus created for a very special place on this wall and became a key work for the entire installation." In 2008 the installation The Wall was shown for the first time in the West on the occasion of a comprehensive art exhibition at the Kunstmuseum Bochum, La Condition Humaine.
The fusion of digital techniques with representations of well-known icons of art history leads to works of art that are striking and reminiscent of Pop Art. Pusenkoff's art has therefore often been compared with Andy Warhol's. Like Warhol, Pusenkoff uses reproductions and sequences, uses bright colors, and thematizes the comprehensive availability of art objects in the media age. Warhol's art was primarily concerned with revealing the industrial manufacturing process of art in the sense of The Work of Art in the Age of Mechanical Reproduction. Pusenkoff's theme, on the other hand, is the significance of painting as an antithesis to the computer-generated flood of images of the present: "I love painting," says Pusenkoff, and his entire oeuvre impressively testifies to this preference.
Computer Interface and Pixel
Although Pusenkoff defines himself as a classical painter, he uses the possibilities of the computer to create his works. He loads pictures from the Internet onto his computer, edits them using Photoshop, enlarges or reduces sections, erases parts with a digital eraser, and so on. In the next step a plotter produces films in which the represented motif is dissolved into light and dark areas and the parts previously marked by Pusenkoff are pre-punched. They are cut out of the foils and transferred to the canvas. The artist then applies eight to twelve layers of acrylic paint, partly mixed with sand, so that relief-like raised parts form in the picture. This technique of image processing is called pochoir.
In 1996, Pusenkoff painted a work depicting a Windows screen (Big Square (1:1)). The picture shows the screen in detail with all the task bars. At the top is the title of the file, Square, and next to it the information that the image is displayed in the image-editing software at a ratio of 1:1, i.e. at the same size as the original. Below is information about how much space the file needs – 28 KB. Nothing in the picture reveals an artistic comment of any kind; it is a pure image of a user interface. And yet it is not.
By describing the computer interface as a "window," Microsoft unconsciously gives the computer the role that the panel painting had before the advent of mass media. Already in 1435 the architect Leon Battista Alberti described in his work de pictura the painted picture as a "window to the world". The central element of the new technology is the pixel that constantly returns in Pusenkoff's pictures: "What a good graphics card and a fast computer make forget when the picture appears on the screen, Pusenkoff wants to raise awareness as a painter, namely the pixel and thus the media character, the 'madeness' of the seemingly so immaterial virtual picture worlds." (in German: "Was eine gute Grafikkarte und ein schneller Rechner vergessen lassen, wenn das Bild auf dem Bildschirm erscheint, das möchte Pusenkoff als Maler ins Bewusstsein heben, nämlich das Pixel und damit den medialen Charakter, die 'Gemachtheit' auch der scheinbar so unstofflichen virtuellen Bildwelten.")
Many of Pusenkoff's works show components of a computer screen, and the work titles also repeatedly refer to the possibilities of the computer in the design of images, such as "Cancel" (Who is afraid Cancel Cancel, 1998), "Matrix" (Paint Matrix, 2001) or "Erased" (Erased Painting, 2003). The human being who processes, manipulates, distorts, brings images into other contexts and can share them with other users on the computer seems godlike. He masters everything and is himself untouchable behind his computer. Pusenkoff's works irritate by imitating the screen surface. The viewer believes he sees a work of art that he can change and design. But in reality it is a painted picture that is unchangeable. This irritation is intended by the artist: "Moreover, I am sure that constant irritation is the main condition for the perception of any artistic language."
In the virtual world one can experience and understand everything, but a human being is not only consciousness; he is also, and above all, a body, which the virtual world does not reach. There is a big difference between standing directly in front of a painted work of art and looking at virtual pictures on a computer. In the one case you stand as a three-dimensional being in a real space in front of the artwork and react to it with your entire body and mind – in the other case you are yourself reduced to a virtual being in which only consciousness counts. By transforming the surface of the virtual screen into a real, touchable panel painting, Pusenkoff invites the viewer to become aware of this revolutionary change in sensory perception. Colour, haptics, the use of light in his works – all this affects the body and mind of the viewer and has an energizing effect. According to Pusenkoff, the computer itself can neither produce art nor create a deep space between the work and the viewer that would lead to a resonant vibration. In this context, Pusenkoff's pictures seem like exclamation marks: "Look, this is how the computer changes us".
Mona Lisa in the work of George Pusenkoff
Leonardo da Vinci's masterpiece Mona Lisa has become part of our cultural memory and has inspired many artists of the 20th century to create their own works based on the painting. George Pusenkoff has also dealt with the Mona Lisa. In 1993 he painted his first work incorporating an image of the Mona Lisa, titled said Duchamp. Since then, he has repeatedly worked with the image of the Mona Lisa, to an extent no other artist had before him: "For Pusenkoff, the Mona Lisa has become his female alter ego, an iconic representation of his own artistic identity."
Single Mona Lisa (1:1)
In 1997 he created probably his best-known work featuring the Mona Lisa, the painting Single Mona Lisa (1:1). It shows the face of the young woman, colored by Pusenkoff in white, black and yellow. He first downloaded the picture from the Louvre's website, then edited it on the computer and finally painted it with acrylic paint on canvas. The work shows – as usual for Pusenkoff – a computer frame around the face of the Mona Lisa; task bars are shown as if one could edit the picture, at the top is the title of the picture, Single Mona Lisa (1:1), and at the bottom is the file size in megabytes. Again, the illusion is perfect that this is a pure digital copy of Leonardo da Vinci's artwork. But not only is it a painted picture, the picture detail which Pusenkoff chose also differs from the original. George Pusenkoff: "I tried various things with a pixel brush, created concentrations of pixel modules, placed them in different sizes and as accents on different places on the face. I wanted even the abstraction to breathe and vibrate like the original."
The fact that Leonardo da Vinci's work became a media icon in the course of the 20th century is the reason why Pusenkoff's work can nevertheless create the impression of the original in the viewer. George Pusenkoff says: "When an old Russian icon-painter paints an icon, he doesn't paint the picture of a saint but the saint himself in the form of color. What results is an identification between the painting and that which is painted. My picture can say: I am the Mona Lisa. When you see me, you see the Mona Lisa, you remember her. In the memory of the viewers who have seen my picture, what remains is that they have seen the Mona Lisa, not a picture of the Mona Lisa I have painted."
Mona Lisa Travels
In 1998 the painting Single Mona Lisa was exhibited at the Russian Museum in St. Petersburg. Pusenkoff himself transported the work of art by car from Moscow to St. Petersburg and spontaneously took photographs of it in front of various backgrounds. As a result, he travelled with the artwork through all of Russia and photographed it in various situations. The photos were taken with a medium-format camera and show very different situations. Pusenkoff arranged the picture details according to purely artistic considerations and included effects such as reflections or light and shadow in his conception: "I see a location where I think the Mona Lisa would function well. The goal of the action is to make a photo and not merely to arrange a beautiful or poetic moment. I have to balance the Mona Lisa in the surroundings so that the photo on the one hand seems harmonious but, on the other, captures the surprising act of the moment. And most of the time the performance is also ended with the photo."
Some examples of these photographs with his work Single Mona Lisa are mentioned here: one can see, for example, market scenes with Pusenkoff's Mona Lisa and a dacha with the picture; Pusenkoff also brought the picture to the ship Aurora, which played an important role in the October Revolution. In the Russian Museum he photographed his work in front of famous Russian works of art of the 19th century; he placed it in front of monuments, positioned it at the entrance door of a blue church, showed homeless people in front of the picture and contrasted it with a shot of his Mona Lisa in the casino of St. Petersburg. The most spectacular picture is a shot of an elephant balancing the work with its trunk – a shot made possible because a film shoot involving an elephant was taking place in a Moscow suburb at the time. So far the painting has travelled through Russia, Israel, Germany and Italy.
Mona Lisa Time Tower
In 2002 George Pusenkoff started working on his project Mona Lisa 500. The starting point for this project was an invitation from the Tretyakov Gallery for a solo exhibition in 2004, and the 500th birthday of the original painting. Together with the then director of the Museum Ludwig, Marc Scheps, Pusenkoff developed the concept of bringing 500 silkscreen versions of his painting Single Mona Lisa (1:1) together in one installation. A huge round tower was created, with a diameter of ten meters, a circumference of 30 meters and a height of six meters. The exterior of the tower consists of 500 squares made of aluminium, measuring 60 × 60 × 4 cm, coated with black industrial lacquer, which brilliantly reflect the surroundings and sunlight. For the entrance to the walk-in installation, 8 squares were removed. From the inside, the viewer is confronted with 500 screen-printed versions of the painting Single Mona Lisa (1:1), gleaming in all the colours of the rainbow. The particular challenge was to bring the colours close to those of a rainbow in a natural-looking colour gradient: "I thought one could simply pick the colors from an RAL chart and the smooth transitions would come about automatically. But with the standard values one doesn't achieve the fluid transitions and the impression that a color wheel seems to close. For that one requires a spectrum of 50 precisely structured tones and half-tones that I developed for this work with the RAL Institute. This color sequence has now been patented; every color has its own number, a code, and all of them together are my little secret." (George Pusenkoff)
Inside the room one hears an endless loop of the music Voice of Mona Lisa, which Pusenkoff composed from archive material of Leonardo da Vinci. The spatial installation was not only exhibited in the Tretyakov Gallery, but was also shown at the 51st International Biennale in Venice in 2005. Since 2005, the tower has been located on the grounds of the Museum Ritter in Waldenbuch, Germany.
Mona Lisa goes Space
On 15 April 2005 – coincidentally the birthday of Leonardo da Vinci – the Russian Soyuz spacecraft TMA-6 launched from the Baikonur spaceport to the International Space Station (ISS). On board was the painting by George Pusenkoff Single Mona Lisa (1:1). It is the ultimate continuation of the project "Mona Lisa Travels" developed by Pusenkoff and could only be realized under difficult conditions. The authorities found the idea good in theory, but constantly put forward new reasons why the project was unrealizable. Only when George Pusenkoff wrote to the then Italian ambassador in Russia, Gianfranco Facco Bonetti, did the possibility of realizing the idea arise. The connection between science and art, which was also important in Leonardo da Vinci's life, found here an "actualization" in the form of this journey of an image of the Mona Lisa into space. For the action, Pusenkoff's painting was removed from its frame so that it could be rolled. On board the spaceship, the painting was supervised by the Italian astronaut Roberto Vittori. On 25 April 2005, the Soyuz returned to Earth with the painting.
In addition to Pusenkoff's original painting "Single Mona Lisa (1:1)", an artificially created crystal also flew on this mission, on which an image of the painting was applied in nanotechnology: "The actual image of the Mona Lisa is found on a metal plate that measures approximately two by two millimeters and is suspended in a synthetic crystal. On this minute piece of metal there is a tiny point. Within this point, an area is defined that is approximately 1/100th of this point itself, and here there is a relief of the Mona Lisa. To produce this relief, the tip of a needle is electronically charged and with a computer–driven robot guided to the area on which the image should appear. Finally, the oxygen is withdrawn, and everywhere where the tip of the needle has touched the carrier material, it oxidizes. In this way, a relief is created that is built up of molecules."
It is not possible to see this image with the naked eye. Only through the use of a computer that scans signals is it possible to make the work of art visible. The crystal with the image of the Mona Lisa painted by Pusenkoff is still on the International Space Station and orbits our planet several times a day.
Collections (Selection)
Bolshoi Theatre -Museum, Moscow, Russia
State Russian Museum, St. Petersburg, Russia
State Tretjakov Gallery, Moscow, Russia
Moscow Museum of Modern Art, Russia
Institution of Engineering and Technology (IEE), Moscow, Russia
ART4.RU Contemporary Art Museum, Moscow, Russia
Stella Art Foundation, Moscow, Russia
Collection Daimler-Benz, Stuttgart, Germany
Museum Ludwig, Cologne, Germany
Museum Ludwig, Koblenz, Germany
Ludwig Forum für Internationale Kunst, Aachen, Germany
Kunstmuseum Bochum, Germany
Märkisches Museum Witten, Germany
Jüdisches Museum Westfalen, Dorsten, Germany
Ritter Museum, Waldenbuch, Germany
Collection of the Federal Ministry of Labour, Berlin, Germany
Collection of the Federal State of Hesse
Hessian State Museum (Darmstadt), Germany
Collection Solomon Oppenheim Bank, Cologne, Germany
Museo La Biennale di Venezia, Venice, Italy
Art Collection, Rockefeller University, New York City, United States
The Copyright-Violation-Process initiated by Helmut Newton (The Power of Blue)
In 1995 George Pusenkoff was sued by the photographer Helmut Newton, who saw in Pusenkoff's work Power of Blue an unauthorized derivative of one of his photographs, entitled Miss Livingstone I, Beverly Hills, 1981. Newton argued that Pusenkoff's artwork resembled his work so closely in its essential components that it amounted to plagiarism. German copyright law provides in §24 that an "independent" work created in "free use" (in German "freie Benutzung") of another's work may be exploited without the author's permission, and Pusenkoff relied on this provision. The court therefore had to decide whether the picture Power of Blue was to be regarded as an adaptation or as a "free use". The black-and-white photograph by Helmut Newton shows a female nude from the front, sitting on a folding chair. The background is white; to the right and left the surroundings can be seen in outline. The woman sits with her legs spread wide on the chair, one leg bent so that her genitals are clearly visible, and radiates self-confidence. The woman's face is recognizable, and at the edge of the photo a stylized yet recognizable environment is visible. George Pusenkoff's work, on the other hand, is coloured – in the typical deep blue tone quoted from Yves Klein. The nude itself is recognizable only as a silhouette. A yellow square covers the woman's pubic area and reaches to her knee. The yellow square is to be understood as a reminiscence of Kasimir Malevich.
The Hanseatisches Oberlandesgericht decided in favor of Pusenkoff, since his treatment had so transformed almost all the core elements of Helmut Newton's photograph that there was hardly anything left to remind one of Newton's work. The court went into detail about the differences between Newton's and Pusenkoff's works. The judges argued that Newton's métier is that of a photographer who primarily works with light, whereas Pusenkoff's profession is that of a painter whose field of work is the surface. While Newton is concerned with the objectified representation of eroticism, none of this is visible in George Pusenkoff's work: in his picture the nude can only be seen as a silhouette, the yellow square hides her nakedness, and the colour blue is clearly perceived as a reminiscence of Yves Klein. While Newton insisted that the particular pose in which the woman appears in Pusenkoff's painting Power of Blue was identical to the one in his photograph, the court declared that the mere posture and pose of a photographed subject are not protected by copyright.
At the time, the case caused a sensation, also internationally, not only because of the plaintiff Helmut Newton, but above all because it was one of the first trials on the appropriation and alteration of existing works of art as practised by postmodern artists, for example in appropriation art.
Exhibitions (Selection)
Solo exhibitions
1991: Galerei Hans Mayer, Düsseldorf, Germany
1993: The Wall, Tretyakov Gallery, Moscow, Russia (This exhibition was also shown in the same year at the Galerie Hans Mayer in Düsseldorf)
1995: Ursula-Blickle-Stiftung, Kraichtal, Germany
1995: Russian Museum, St. Petersburg, Russia
1997: Simply Virtual, Mannheimer Kunstverein, Mannheim, Germany
1998: Simply Virtual, Museum Ludwig in the Russian Museum, St. Petersburg, Russia
2002: Erased Malevich, Felix Nussbaum Haus, Osnabrück, Germany
2002: George Pusenkoff: Painted and Erased, Märkisches Museum (Witten), Witten, Germany
2003: George Pusenkoff: Erased or Not Erased, Jüdisches Museum Westfalen, Dorsten, Germany
2004: Mona Lisa 500, Tretyakov Gallery, Moscow, Russia
2007: Mona Lisa und das schwarze Quadrat, Museum Ritter, Waldenbuch, Germany
2007: George Pusenkoff: Who is afraid, Moscow Museum of Modern Art (MMoMA), Moscow, Russia
2008: La Condition Humaine, Museum Bochum, Germany
2011: George Pusenkoff: Neo–Gau Malerei, Mannheimer Kunstverein, Germany
2013: Pusenkoff & Pusenkoff: After Reality (art project of corresponding works by George Pusenkoff and his son Ilya Pusenkoff), Ludwig Museum Koblenz, Koblenz, Germany (the exhibition was later shown at the Moscow Museum of Modern Art (MMoMA), Moscow, Russia)
Group exhibitions
1986: 17th Exhibition of Young Artists, Moscow, Russia
1987: Rock–Art Parade ASSA, Culture House of the Moscow Electric Lamp Factory, Moscow, Russia
1987: Culture of Visual Art, Hermitage Amateur Society, Moscow, Profsojusnaja 100, Russia
1988: Labyrinth, Youth Palace, Moscow, Russia
1988: 18th Exhibition of Young Artists, Moscow, Russia
1988: Gruppe 88, Armenian Embassy, Moscow, Russia
1994: Europa – Europa. Das Jahrhundert der Avantgarde in Mittel- und Osteuropa, Kunst- und Ausstellungshalle der Bundesrepublik Deutschland, Bonn, Germany
2002: Abstract Art in Russia, XX Century, Russian Museum, St. Petersburg, Russia
2002: Das Rote Haus, Städtische Galerie Villa Zanders, Bergisch Gladbach, Germany
2002: Kunst nach Kunst, Neues Museum Weserburg Bremen, Germany
2003: Das Recht des Bildes. Jüdische Perspektiven in der modernen Kunst, Museum Bochum, Germany
2003: Das Quadrat in der Kunst, Sammlung Marli Hoppe-Ritter, Museum Ettlingen, Germany
2003: New Countdown – Digital Russia, Guelman Gallery (together with Sony), Central House of Artists, Moscow, Russia
2004: Stella Art Gallery, Moscow, Russia
2004: Moskau – Berlin, State Historical Museum, Moscow, Russia
2005: Faces, Guelman Gallery, Central House of Artists, Moscow, Russia
2005: Russian Pop Art, Tretyakov Gallery, Moscow, Russia
2005: Mona Lisa goes Space, 51st International Art Exhibition, Biennale di Venezia, Italy
2005: Square, Museum Ritter, Waldenbuch, Germany
2005: Between Digital and Analog, Sacral and Profane, 1st Biennale of Contemporary Art, Moscow, Russia
2007: I Believe, 2nd Biennale of Contemporary Art, Moscow, Russia
2012: Decoration of the Beautiful. Elitism And Kitsch in Contemporary Art, Tretyakov Gallery, Moscow, Russia
2014: Post Pop: East Meets West, The Saatchi Gallery, London, England
2015: 6th International Biennale Beijing: Memory and Dream, National Art Museum of China (NAMOC), Beijing, China
2017: 7th International Biennale Beijing: The Silk Road and World's Civilizations, National Art Museum of China (NAMOC), Beijing, China
2017: Touring exhibition Aqua, Château de Penthes, Art for The World, Geneva, Switzerland
2021: Kein Tag ohne Linie, Museum Ritter, Waldenbuch, Germany
Further reading
External links
Website of George Pusenkoff
George Pusenkoff at Artfacts.net
George Pusenkoff at Kunstaspekte.art
Free use of a figure from a photograph for a painting – the full text of the judgment at Wikimedia Commons
References
Attribution
1953 births
Living people
People from Krasnapolle District
20th-century German artists
21st-century German artists
German contemporary artists
German installation artists
Belarusian Jews
German abstract artists
Jewish painters
Modern painters
Postmodern artists
Kandinsky Prize |
3915448 | https://en.wikipedia.org/wiki/Sarkar%20Raj | Sarkar Raj | Sarkar Raj () is a 2008 Indian Hindi-language political crime thriller film directed by Ram Gopal Varma. The film is a sequel to the 2005 film Sarkar and the second installment of Sarkar film series. The film was premiered at the 2008 Cannes Film Festival, the New York Asian Film Festival, and the 9th IIFA World Premiere-Bangkok.
The film was archived in the library of the Academy of Motion Picture Arts and Sciences. The primary cast features Amitabh Bachchan and Abhishek Bachchan (who reprise their roles from the original) and new entrant Aishwarya Rai Bachchan. Supriya Pathak, Tanisha Mukherjee and Ravi Kale also reappeared in their respective roles from Sarkar. The film was released on 6 June 2008 and was critically and commercially successful. The continuation and third installment, Sarkar 3, was released on 12 May 2017 to positive reviews.
Plot
The sequel is chronologically set two years after the original film.
Anita Rajan (Aishwarya Rai Bachchan), the CEO of the London-based Shepard Power Plant, holds a meeting with Mike Rajan (Victor Banerjee), her father and chairman, and Hassan Qazi (Govind Namdeo), a seemingly shady business adviser and facilitator, regarding an ambitious proposal to set up a multibillion-dollar power plant in rural parts of the Indian state of Maharashtra.
Qazi states that the project will be impossible due to likely political entanglements. When Anita asks him for a solution, Qazi states that enlisting the support of Subhash Nagre (Amitabh Bachchan), commonly referred to by his title of Sarkar and described by Qazi as a criminal in the garb of a popular and influential political leader, might help their cause. Along with the Chief Minister of Maharashtra, Shinde (Shishir Sharma), they approach Sarkar with the idea of the project; he rejects it because the power plant would be built across various villages, affecting the livelihoods of 40,000 people. However, when Shankar (Abhishek Bachchan) convinces him of the project's benefits to the state, Sarkar agrees to the proposal. Shankar advises Anita to stay away from Qazi, as he is not trustworthy. Qazi joins hands with Kaanga (Sayaji Shinde), who wants to become Chief Minister of Maharashtra but cannot, as Sarkar is the overlord of Shinde's political party. Shankar and Anita begin campaigning in Thackerwadi to gain the local public's support for the project. During their conversation, Shankar mentions that the toughest decision of his life was to kill his own elder brother Vishnu, while Anita reveals that her father never saw her as a daughter, only as an employee.
Sanjay Somji (Rajesh Shringarpure), the leader of a farmers' association, is shown protesting against the Nagres. Meanwhile, Avanti, now Shankar's wife, reveals to him that she is two months pregnant; Shankar also has growing friction with the old family aide Chander (Ravi Kale). Later, Avanti's car is bombed within the premises of Sarkar's villa and she is killed; a shaken Sarkar suffers a shock and is admitted to a hospital. Shankar puts Bala (Sumit Nijhawan) in charge over Chander and asks him to quickly find out who was behind the brutal attack. Kantilal Vohra (Upendra Limaye) comes to Sarkar, requesting that the project be shifted to Gujarat. When Sarkar refuses, Vohra, Kaanga and Qazi are shown hatching a plan together. Chander calls Shankar to tell him that Qazi was behind the blast, and Shankar shoots Qazi in his house. Mike comes to India and meets Vohra to discuss eliminating Shankar, as they both want only profit, while Shankar also aims at development for the 40,000 villagers living in Thackerwadi.
Vohra and Kaanga then hire a hitman to kill Shankar for ₹5 crore. While Shankar and Anita are on a holiday, Anita cautions Shankar about an impending attack on them; a sniper fires six bullets at Shankar, who later succumbs to his injuries in hospital. A furious Subhash, suspecting Vohra, has him kidnapped.
Sarkar tells Anita that his men have killed Kaanga, Chander, Vohra and her father, who was in London, as revenge. He also tells her that these people were just pawns and that the mastermind behind it all was his own guru, Rao Sahab (Dilip Prabhawalkar), who wanted his grandson Somji to take over from Shankar. When his guru comes to the house to pay tribute to Shankar, Sarkar shows him his grandson's dead body. The film ends with Anita becoming Shankar's replacement.
Cast
Amitabh Bachchan as Subhash Nagre
Abhishek Bachchan as Shankar Nagre
Aishwarya Rai Bachchan as Anita Rajan
Tanisha Mukherjee as Avantika, Shankar's wife
Govind Namdeo as Hassan Qazi
Victor Banerjee as Mike Rajan, Anita's father
Supriya Pathak as Pushpa Nagre
Sayaji Shinde as Karunesh Kaanga
Dilip Prabhavalkar as Rao Saab
Sumeet Nijhawan as Bala
Kay Kay Menon as Vishnu Nagare (Cameo from the original film Sarkar)
Upendra Limaye as Kantilal Vohra
Rajesh Shringarpure as Sanjay Somji
Shishir Sharma as Sunil Shinde
Ravi Kale as Chander
Javed Ansari as The Hitman
Reception
Critical reception
The film mostly garnered a positive critical reception. Critic Taran Adarsh from Bollywood Hungama gave the film four stars out of five and noted "Besides its strong content, Sarkar Raj has been filmed exceptionally well with superb performances. Amitabh Bachchan, expectedly, comes up with a terrific performance. He's as ferocious as a wounded tiger in the finale and takes the film to great heights. Abhishek Bachchan is cast opposite the finest actor of this country, yet he sparkles in every sequence. Aishwarya Rai Bachchan is fabulous and delivers her career-best performance." Sify gave a two-star rating and said, "The only reason you might want to catch this is the performance level and the relatively good ending. Amitabh Bachchan is dependably good. Abhishek holds his own, though with a more filled-out character, he could have taken it to another level. Aishwarya is superb in the emotional scenes, but again, is let down by the unforgivably simplistic character sketching." Rediff, which also gave a two-star rating, found the film watchable.
The Economic Times gave a three star rating out of five and said "Sarkar Raj clearly gains major marks for its clever culmination, which was so much lacking from recent RGV products. The considerately and crisply penned dialogues by Prashant Pandey add a lot of insight to the scenes and depth to the characterizations." Anupama Chopra from NDTV stated "What works here are the performances. The Bachchans-all three of them are in fine form. Despite wonderful performances and nicely done dramatic moments, Sarkar Raj doesn't pack the visceral punch of Sarkar".
Nikhat Kazmi of The Times of India rated the film with three and a half stars and applauded the lead performances saying "This film carries the sequel forward without losing out on the gritty feel and retains the charisma of the central characters". Critic Nathan Southern of MSN gave four stars citing that "Sarkar Raj thrives on its narrative cliffhangers, that the film never once fails to engage the audience; the premise and its characters are rock-solid, its dialogue convincing, and its suspense palpable. Varma and scriptwriter Prashant Pandey pack such unusual twists and double-crosses into the tale that even the most hardened and seasoned moviegoer will find the conclusion impossible to foresee".
Box office reception
Sarkar Raj grossed almost ₹340 million in India and over $1 million in the United States. Filmfare magazine (August 2008 issue) and other media declared it to be among the only four hits of the first half of 2008 (along with Race, Jodhaa Akbar and Jannat).
The producers reported that the film had earned more than the entire gross of its hit prequel within its first two weeks. According to the year-end report of The Free Trade Journal, Sarkar Raj was the seventh-highest all-India grosser of the year after (in order) Ghajini, Rab Ne Bana Di Jodi, Golmaal Returns, Singh Is Kinng, Dostana and Race. The trade magazine also reported strong international collections. It was declared a super hit at the box office.
Soundtrack
The music is composed by Bapi and Tutul. Lyrics are penned by Sandeep Nath and Prashant Pandey.
Track listing
Accolades
Controversy
Debutant Rajesh Shringarpure's character of Sanjay Somji was also reportedly based on Raj Thackeray, the estranged nephew of political leader Bal Thackeray, furthering the general viewpoint that the series is based on Bal Thackeray and his family. Ram Gopal Varma reportedly even showed Raj Thackeray rushes of the film to allay his fears of being wrongly portrayed.
Sequel
In 2009, Ram Gopal Varma stated that he had no plans finalised for a third instalment in the series and shelved Sarkar 3. However, in 2012 it was reported that the sequel would go ahead after all and was in the pre-production stage, with the script being written. The film was expected to go on floors at the end of 2013, primarily with the same cast of Amitabh and Abhishek Bachchan (although Shankar dies at the end of this film), with Aishwarya Rai left out.
In August 2016, director Ram Gopal Varma confirmed Sarkar 3. He announced on Twitter that Abhishek and Aishwarya would not be part of the third installment.
Notes
References
External links
Indian films
Indian sequel films
Indian gangster films
Bal Thackeray
Indian crime thriller films
Indian crime drama films
Films set in Mumbai
Hindi-language films
Indian political thriller films
2008 crime drama films
2000s political thriller films
2000s Hindi-language films
2008 crime thriller films
2008 films
Films about dysfunctional families
Films about organised crime in India
Films directed by Ram Gopal Varma
Balaji Motion Pictures films |
59844792 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Concurrency%20Theory | International Conference on Concurrency Theory | The International Conference on Concurrency Theory (CONCUR) is an academic conference in the field of computer science, with a focus on the theory of concurrency and its applications. It is the flagship conference for concurrency theory according to the International Federation for Information Processing Working Group on Concurrency Theory (IFIP WG 1.8). The conference has been organised annually since 1988. Since 2015, papers presented at CONCUR have been published in LIPIcs – Leibniz International Proceedings in Informatics, a "series of high-quality conference proceedings across all fields in informatics established in cooperation with Schloss Dagstuhl – Leibniz Center for Informatics". Previously, CONCUR papers were published in the Lecture Notes in Computer Science series.
According to CORE Ranking, CONCUR has rank A ("excellent conference, and highly respected in a discipline area").
According to Google Scholar Metrics (as of 20 July 2019), CONCUR has H5-index 21 and H5-median 34.
Editions
32nd CONCUR 2021: Paris, France (online)
31st CONCUR 2020: Vienna, Austria (online)
30th CONCUR 2019: Amsterdam, the Netherlands
29th CONCUR 2018: Beijing, China
28th CONCUR 2017: Berlin, Germany
27th CONCUR 2016: Québec City, Canada
26th CONCUR 2015: Madrid, Spain
25th CONCUR 2014: Rome, Italy
24th CONCUR 2013: Buenos Aires, Argentina
23rd CONCUR 2012: Newcastle upon Tyne, UK
22nd CONCUR 2011: Aachen, Germany
21st CONCUR 2010: Paris, France
20th CONCUR 2009: Bologna, Italy
19th CONCUR 2008: Toronto, Canada
18th CONCUR 2007: Lisbon, Portugal
17th CONCUR 2006: Bonn, Germany
16th CONCUR 2005: San Francisco, CA, USA
15th CONCUR 2004: London, UK
14th CONCUR 2003: Marseille, France
13th CONCUR 2002: Brno, Czech Republic
12th CONCUR 2001: Aalborg, Denmark
11th CONCUR 2000: Pennsylvania State University, Pennsylvania, USA
10th CONCUR 1999: Eindhoven, The Netherlands
9th CONCUR 1998: Nice, France
8th CONCUR 1997: Warsaw, Poland
7th CONCUR 1996: Pisa, Italy
6th CONCUR 1995: Philadelphia, PA, USA
5th CONCUR 1994: Uppsala, Sweden
4th CONCUR 1993: Hildesheim, Germany
3rd CONCUR 1992: Stony Brook, NY, USA
2nd CONCUR 1991: Amsterdam, The Netherlands
1st CONCUR 1990: Amsterdam, The Netherlands
Concurrency: Theory, Language, And Architecture 1989: Oxford, UK
Concurrency 1988: Hamburg, Germany
Seminar on Concurrency 1984: Pittsburgh, PA, USA
Test-of-Time Award
In 2020, the International Conference on Concurrency Theory (CONCUR) and the IFIP Working Group 1.8 on Concurrency Theory established the CONCUR Test-of-Time Award. The goal of the award is to recognize important achievements in concurrency theory that have stood the test of time and were published at CONCUR since its first edition in 1990. Starting with CONCUR 2024, an award event will take place every other year and recognize one or two papers presented at CONCUR in the 4-year period from 20 to 17 years earlier. From 2020 to 2023, two such award events are combined each year, in order to also recognize achievements that appeared in the early editions of CONCUR.
2021
Period 1994–1997
David Janin & Igor Walukiewicz: "On the Expressive Completeness of the Propositional mu-Calculus with Respect to Monadic Second Order Logic." (CONCUR 1996)
Uwe Nestmann & Benjamin C. Pierce: "Decoding Choice Encodings" (CONCUR 1996)
Period 1996–1999
Ahmed Bouajjani, Javier Esparza & Oded Maler: "Reachability Analysis of Pushdown Automata: Application to Model-checking" (CONCUR 1997)
Rajeev Alur, Thomas A. Henzinger, Orna Kupferman & Moshe Y. Vardi: "Alternating Refinement Relations" (CONCUR 1998)
2020
Period 1990–1993
Rob van Glabbeek: "The Linear Time-Branching Time Spectrum" (CONCUR 1993)
Søren Christensen, Hans Hüttel & Colin Stirling: "Bisimulation Equivalence is Decidable for all Context-Free Processes" (CONCUR 1992)
Period 1992–1995
Roberto Segala & Nancy Lynch: "Probabilistic Simulations for Probabilistic Processes" (CONCUR 1994)
Davide Sangiorgi: "A Theory of Bisimulation for the pi-Calculus" (CONCUR 1993)
Affiliated events
International Conference on Formal Modeling and Analysis of Timed Systems (FORMATS)
International Conference on Quantitative Evaluation of SysTems (QEST)
See also
List of computer science conferences
List of computer science conference acronyms
List of publications in computer science
Outline of computer science
References
External links
DBLP Page of CONCUR Conferences
Computer science conferences |
16944393 | https://en.wikipedia.org/wiki/Microsoft%20Word%20Viewer | Microsoft Word Viewer | Microsoft Word Viewer is a discontinued freeware program for Microsoft Windows that can display and print Microsoft Word documents. Word Viewer allows text from a Word document to be copied into the clipboard and pasted into a word processor. The last version was Word Viewer 2003 Service Pack 3, released in 2007.
According to the license terms of the Microsoft Word Viewer, the software may be installed and used only to view and screen print documents created with Microsoft Office software. The software may not be used for any other purpose. Users may distribute the software only with a file created with Microsoft Office software, to enable the recipient to view and print the file.
On November 29, 2017, Microsoft announced that Word Viewer was retired that month and would no longer receive security updates or be available for download. Microsoft recommended that Windows 10 users use the Word Mobile application, and that Windows 7 and Windows 8 users upload their files to OneDrive and use Word Online to view and print documents free of charge with a Microsoft account.
Format support
Microsoft Word Viewer supports:
binary Word documents (.doc)
Rich Text Format (.rtf)
Text files (.txt)
HTML (.htm, .html) and MHTML (.mht, .mhtml)
Word XML format (.xml)
WordPerfect v5.x and v6.x files (.wpd)
Microsoft Works documents (.wps)
The Compatibility Pack for the Word, Excel, and PowerPoint File Formats was released on 6 November 2006, providing support for document formats found in Word 2007:
Office Open XML documents (.docx, .docm)
History
Word Viewer 6.0 was 16-bit and corresponded to Word 6.0. Word Viewer 7.0 and 7.1 were 32-bit and corresponded to Word 95. Word Viewer 7.1 would execute WordBasic macros without warning. The typical macro virus would not spread, but it could still execute (including any malicious code).
Word Viewer 97 was released with Word 97. It was available for Windows in 32-bit versions. It can display Word documents in Internet Explorer 3.x and later.
Word Viewer 2003 was released on 15 December 2004. This includes many security enhancements over Word Viewer 97, and was the first version of Word Viewer to receive security updates.
Word Viewer 2003 Service Pack 3 was released on 26 September 2007 with Office 2003 SP3. Microsoft continued to provide security updates until February 2019 (mostly because POSReady 2009 shipped with it).
Development of the product has since stopped. In the meantime, Microsoft has made other ways of reading Office documents available, through Word Online as well as WordPad (a native component of Windows) in Windows 7 and later, which can create, view or edit Office Open XML documents (.docx) alongside Rich Text Format (.rtf) and text files (.txt).
No versions for any other operating system besides Windows were ever released.
See also
Microsoft Excel Viewer
Microsoft PowerPoint Viewer
List of word processors
Comparison of word processors
References
Word Viewer
1999 software
Windows-only freeware |
2317568 | https://en.wikipedia.org/wiki/Ssh-agent | Ssh-agent | Secure Shell (SSH) is a protocol allowing secure remote login to a computer on a network using public-key cryptography. SSH client programs (such as ssh from OpenSSH) typically run for the duration of a remote login session and are configured to look for the user's private key in a file in the user's home directory (e.g., .ssh/id_rsa). For added security (for instance, against an attacker that can read any file on the local filesystem), it is common to store the private key in an encrypted form, where the encryption key is computed from a passphrase that the user has memorized. Because typing the passphrase can be tedious, many users would prefer to enter it just once per local login session. The most secure place to store the unencrypted key is in program memory, and in Unix-like operating systems, memory is normally associated with a process. A normal SSH client process cannot be used to store the unencrypted key because SSH client processes only last the duration of a remote login session. Therefore, users run a program called ssh-agent that runs beyond the duration of a local login session, stores unencrypted keys in memory, and communicates with SSH clients using a Unix domain socket.
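The agent's wire protocol is simple enough to exercise directly. The following Python sketch connects to the Unix domain socket named in the SSH_AUTH_SOCK environment variable and asks a running agent for the identities it currently holds, roughly what "ssh-add -l" reports; the message numbers and framing follow the published ssh-agent protocol draft, and the snippet is an illustration rather than a complete client.
# Sketch: list the comments of the keys held by a running ssh-agent by speaking
# the agent protocol over the Unix domain socket in SSH_AUTH_SOCK.
import os
import socket
import struct

SSH_AGENTC_REQUEST_IDENTITIES = 11   # request message type
SSH_AGENT_IDENTITIES_ANSWER = 12     # expected reply message type

def list_agent_keys():
    sock_path = os.environ["SSH_AUTH_SOCK"]            # set by ssh-agent at startup
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        # Each agent message is a 4-byte big-endian length followed by the payload.
        s.sendall(struct.pack(">IB", 1, SSH_AGENTC_REQUEST_IDENTITIES))
        length = struct.unpack(">I", s.recv(4))[0]
        payload = b""
        while len(payload) < length:
            payload += s.recv(length - len(payload))
    if payload[0] != SSH_AGENT_IDENTITIES_ANSWER:
        raise RuntimeError("unexpected agent response")
    nkeys = struct.unpack(">I", payload[1:5])[0]
    offset, comments = 5, []
    for _ in range(nkeys):
        bloblen = struct.unpack(">I", payload[offset:offset + 4])[0]
        offset += 4 + bloblen                           # skip the public key blob
        commentlen = struct.unpack(">I", payload[offset:offset + 4])[0]
        comments.append(payload[offset + 4:offset + 4 + commentlen].decode())
        offset += 4 + commentlen
    return comments

if __name__ == "__main__":
    print(list_agent_keys())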
Security issues
ssh-agent creates a socket and then listens for connections from ssh. Anyone who is able to connect to this socket also has access to the ssh-agent. The permissions are set as in a usual Linux or Unix system. When the agent starts, it creates a new directory in /tmp with restrictive permissions. The socket is located in this directory.
There is a procedure that may prevent malware from using the ssh-agent socket. If the ssh-add -c option is set when the keys are imported into the ssh-agent, then the agent requests a confirmation from the user using the program specified by the SSH_ASKPASS environment variable, whenever ssh tries to connect.
Ssh-agents can be "forwarded" onto a server you connect to, making their keys available there as well, for other connections. On the local system, it is important that the root user is trustworthy, because the root user can, amongst other things, just read the key file directly. On the remote system, if the ssh-agent connection is forwarded, it is also important that the root user on the other end is trustworthy, because it can access the agent socket on the remote (though not the key, which stays local).
Implementations
There are many different programs that perform the same functionality as the OpenSSH ssh-agent, some with very different user interfaces. PuTTY, for example, uses a graphical user interface in its bundled Pageant ssh-agent.
There are tools designed to provide key-agent functionality for both symmetric and asymmetric keys; these usually provide ssh-agent functionality as one of their application interfaces. Examples include GNOME Keyring and KWallet.
Some monolithic SSH clients include the ability to remember SSH passphrases across sessions. Examples include: SecureCRT.
Apple macOS
On the macOS operating system, ssh-agent has been integrated since Leopard, version 10.5 in 2007. Third-party open-source implementations of ssh-agent were available previously.
Microsoft Windows
OpenSSH-based client and server programs have been included in Windows 10 since version 1803. The SSH client and key agent are enabled and available by default and the SSH server is an optional Feature-on-Demand.
References
External links
ssh-agent man page from OpenSSH release (part of the OpenBSD project).
third-party alternative ssh-agent front-end for Mac OS X
"Using ssh-agent with SSH"
An Illustrated Guide to SSH Agent Forwarding
security aspects
Cryptographic software
Key management |
1273369 | https://en.wikipedia.org/wiki/Action%21%20%28programming%20language%29 | Action! (programming language) | Action! is a procedural programming language and integrated development environment written by Clinton Parker for the Atari 8-bit family. The language, which is similar to ALGOL, compiled to high-performance code for the MOS Technology 6502 of the Atari computers. Action! was distributed on ROM cartridge by Optimized Systems Software starting in 1983. It was one of the company's first bank-switched "Super Cartridges", with a total of 16 kB of code.
Working with Henry Baker, Parker had previously developed Micro-SPL, a systems programming language for the Xerox Alto. Action! was largely a port of Micro-SPL concepts to the Atari with changes to support the 6502 processor and the addition of an integrated fullscreen editor and debugger.
Action! was used to develop at least two commercial products—the HomePak productivity suite and Games Computers Play client program—and numerous programs in ANALOG Computing and Antic magazines. The editor inspired the PaperClip word processor. The language was not ported to other platforms.
The assembly language source code for Action! was made available under the GNU General Public License by the author in 2015.
History
Micro-SPL
While taking his postgraduate studies, Parker started working part-time at Xerox PARC working on printer drivers. He later moved to the Xerox Alto project where he wrote several games for the system. His PhD was in natural language parsing and he had worked on compiler theory during his graduate work.
Henry Baker and Parker released Micro-SPL in September 1979. Micro-SPL was intended to be used as a systems programming language on the Xerox Alto workstation computer, which was normally programmed in BCPL. The Alto used a microcode system which the BCPL compiler output. Micro-SPL output the same format, allowing BCPL programs to call Micro-SPL programs.
Aside from differences in syntax, the main difference between Micro-SPL and BCPL, and the reason for its existence, was that Micro-SPL produced code that was many times faster than the native BCPL compiler. In general, Micro-SPL programs were expected to run about ten times as fast as BCPL, and about half as fast as good hand-written microcode. In comparison to microcode, they claimed it would take half as long to write and 10% of the time to debug it.
Action!
It was during this period that Parker purchased an Atari for use at home, and was disappointed with the lack of development systems for this platform. This was the impetus for the development of Action!
Parker initially considered releasing the system himself, but later decided to partner with Optimized Systems Software (OSS) for sales and distribution. OSS focused on utilities and programming languages like BASIC XL, so this was a natural fit for Action! Sales were strong enough for Parker to make a living off the royalties for several years.
The IBM PC had C compilers available, and Parker decided there was no point in porting Action! to that platform. As the sales of the Atari 8-bit platforms wound down, in North America at least, OSS wound down as well. Late in its history Action! distribution moved from OSS to Electronic Arts, but they did little with the language and sales ended shortly after.
In a 2015 interview, Parker expressed his surprise in the level of support the language continued to receive, suggesting there appeared to be more interest in it then than there had been in the late 1980s.
Development environment
Action! was one of the earlier examples of the OSS SuperCartridge format. ROM cartridges on the Atari were normally limited to 8 kB, which limited their ability to support larger programs. The SuperCartridge had 16 kB organized as four 4 kB blocks, two of which were visible at any time. The lower 4 kB did not change, and the system could bank-switch between the other three blocks by changing the value in address $AFFF.
Action! used this design by breaking the system into four sections: the editor, the compiler, a monitor for testing code and switching between the editor and compiler, and the run-time library. The run-time library is stored in the cartridge itself, so distributing standalone applications required a separate run-time package, which was sold by OSS as the Action! Toolkit.
Action! constructs were designed to map cleanly to 6502 opcodes, to provide high performance without needing complex optimizations in the one-pass compiler. For example, local variables are assigned fixed addresses in memory, instead of being allocated on the stack of activation records. This eliminates the significant overhead associated with stack management, which is especially difficult in the case of the 6502's 256-byte stack. However, this precludes the use of recursion.
Unlike the integrated Atari BASIC and Atari Assembler Editor environments, the Action! editor does not use line numbers. It features a full-screen, scrolling display capable of displaying two windows, as well as block operations and global search and replace.
The monitor serves as a debugger, allowing an entire program or individual functions to be run, memory to be displayed and modified, and program execution to be traced.
Data types
Action! has three fundamental data types, all of which are numeric.
BYTE
Internally represented as an unsigned 8-bit integer. Values range from 0 to 255.
The CHAR keyword can also be used to declare BYTE variables.
BYTE age=[21] ; declare age and initialize it to the value 21
BYTE leftMargin=82 ; declare leftMargin at address 82
CARDinal
Internally represented as an unsigned 16-bit integer. Values range from 0 to 65,535.
CARD population=$600 ; declare population and store it at address 1536 and 1537
CARD prevYear, curYear, nextYear ; use commas to declare multiple variables
INTeger
Internally represented as a signed 16-bit integer. Values range from -32,768 to 32,767.
INT veryCold = [-10]
INT profitsQ1, profitsQ2, ; declaring multiple variables can
profitsQ3, profitsQ4 ; span across multiple lines
Action! also has ARRAYs, POINTERs and user-defined TYPEs. No floating point support is provided.
An example of a user-defined TYPE:
TYPE CORD=[CARD x,y]
CORD point
point.x=42
point.y=23
Reserved words
A reserved word is any identifier or symbol that the Action! compiler recognizes as something special. It can be an operator, a data type name, a statement, or a compiler directive.
The keywords are: AND, ARRAY, BYTE, CARD, CHAR, DEFINE, DO, ELSE, ELSEIF, EXIT, FI, FOR, FUNC, IF, INCLUDE, INT, LSH, MOD, MODULE, OD, OR, POINTER, PROC, RETURN, RSH, SET, STEP, THEN, TO, TYPE, UNTIL, WHILE and XOR.
The recognized symbols are: = <> # + - * / & % ! ( ) . [ ] " ' ; $ ^ @ > >= < <=
Example code
The following is example code for Sieve of Eratosthenes written in Action!. In order to increase performance, it disables the ANTIC graphics coprocessor, preventing its DMA engine from "stealing" CPU cycles during computation.
BYTE RTCLOK=20, ; addr of sys timer
SDMCTL=559 ; DMA control
BYTE ARRAY FLAGS(8190)
CARD COUNT,I,K,PRIME,TIME
PROC SIEVE()
SDMCTL=0 ; shut off Antic
RTCLOK=0 ; reset the clock to zero
COUNT=0 ; init count
FOR I=0 TO 8190 ; and flags
DO
FLAGS(I)='T ; "'T" is a compiler-provided constant for True
OD
FOR I=0 TO 8190 ; now run the sieve
DO
IF FLAGS(I)='T THEN
PRIME=I+I+3
K=I+PRIME
WHILE K<=8190
DO
FLAGS(K)='F ; "'F" is a compiler-provided constant for False
K==+PRIME
OD
COUNT==+1
FI
OD
TIME=RTCLOK ; get timer reading
SDMCTL=34 ; restore screen
PRINTF("%E %U PRIMES IN",COUNT)
PRINTF("%E %U JIFFIES",TIME)
RETURN
Reception
Brian Moriarty, in a February 1984 review for ANALOG Computing, concluded that Action! was "one of the most valuable development tools ever published for the Atari." He cited the manual as the only weak point of the package, claiming it "suffers from lack of confidence, uncertain organization and a shortage of good, hard technical data."
Leo Laporte reviewed Action in the May/June 1984 edition of Hi-Res. He began the review, "This is the best thing to happen to Atari since Nolan Bushnell figured out people would play ping-pong on a TV screen." Laporte praised the editor, noting its split-screen and cut and paste capabilities and describing it as a "complete word processing system that's very responsive." He said that Action! ran about 200 times as fast as Atari BASIC, concluding that "This language is like a finely tuned racing car."
BYTE in 1985 praised the compilation and execution speed of software written in Action! Using their Byte Sieve benchmark as a test, ten iterations of the sieve completed in 18 seconds in Action!, compared to 10 seconds for assembly and 38 minutes in BASIC. The magazine also lauded the language's editor. BYTE reported that the language resembled C closely enough to "routinely convert programs between the two", and approved of its pointer support. The magazine concluded that "Action! is easy to use, quick, and efficient. It can exploit the Atari's full power. Action! puts programming for the Atari in a whole new dimension."
Ian Chadwick wrote in Mapping the Atari that "Action! is probably the best language yet for the Atari; it's a bit like C and Pascal, with a dash of Forth. I recommend it."
See also
PaperClip, Atari 8-bit word processor from a different author and company, based on the Action! editor
References
Citations
Bibliography
External links
Action! Programming Language Version 3.6 - Source Code, by Optimized Systems Software at archive.org
Action! info at Retrobits.com
The ACTION! Archive
Action! language reference
Effectus cross-compiler
Atari 8-bit family software
ALGOL 68 dialect
Optimized Systems Software
Procedural programming languages
Programming languages created in 1983
Statically typed programming languages
Free and open-source software
Formerly proprietary software
Systems programming languages |
739376 | https://en.wikipedia.org/wiki/Netcraft | Netcraft | Netcraft is an Internet services company based in Bath, Somerset, England. The company provides cybercrime disruption services across a range of industries.
History
Netcraft was founded by Mike Prettejohn. The company provides web server and web hosting market-share analysis, including web server and operating system detection. In some cases, depending on the queried server's operating system, their service is able to monitor uptimes; uptime performance monitoring is a commonly used factor in determining the reliability of a web hosting provider. Netcraft has explored the internet since 1995 and is a respected authority on the market share of web servers, operating systems, hosting providers, ISPs, encrypted transactions, electronic commerce, scripting languages and content technologies on the internet.
As a PCI-DSS approved scanning vendor, Netcraft also provides security testing, and publishes news releases about the state of various networks that make up the Internet.
The company is also known for its free anti-phishing toolbar for the Firefox, Internet Explorer, and Chrome browsers. Starting with version 9.5, the built-in anti-phishing filter in the Opera browser uses the same data as Netcraft's toolbar, eliminating the need for a separately installed toolbar. A study commissioned by Microsoft concluded that Netcraft's toolbar was among the most effective tools to combat phishing on the Internet, although it has since been superseded by Microsoft's own Internet Explorer 7 with Microsoft Phishing Filter, possibly as a result of licensing Netcraft's data. The service can only process public IPv4 servers, to the exclusion of IPv6. The browser extensions will display security information for a domain's IPv4 servers even when the user is connected to a different server over IPv6.
In November 2016, Philip Hammond, Chancellor of the Exchequer, announced plans for the UK government to work with Netcraft to develop better automatic defences to reduce the impact of cyber-attacks affecting the UK.
Industry competitors include: Alexa Internet, Compete.com, comScore, Customer knowledge, Hitwise, Nielsen ratings, Quantcast, SimilarWeb, and Spyfu.
See also
Search engine optimization metrics
DShield, Cybercrime analytics
WOT Services, community of volunteer users ranking website reputation
References
External links
Information technology companies of the United Kingdom
Privately held companies of the United Kingdom
Companies based in Bath, Somerset
Software companies established in 1987
1994 establishments in England
British companies established in 1987
Computer security companies
Web analytics |
442736 | https://en.wikipedia.org/wiki/ARP%20spoofing | ARP spoofing | In computer networking, ARP spoofing, ARP cache poisoning, or ARP poison routing, is a technique by which an attacker sends (spoofed) Address Resolution Protocol (ARP) messages onto a local area network. Generally, the aim is to associate the attacker's MAC address with the IP address of another host, such as the default gateway, causing any traffic meant for that IP address to be sent to the attacker instead.
ARP spoofing may allow an attacker to intercept data frames on a network, modify the traffic, or stop all traffic. Often the attack is used as an opening for other attacks, such as denial of service, man in the middle, or session hijacking attacks.
The attack can only be used on networks that use ARP, and requires that the attacker have direct access to the local network segment to be attacked.
ARP vulnerabilities
The Address Resolution Protocol (ARP) is a widely used communications protocol for resolving Internet layer addresses into link layer addresses.
When an Internet Protocol (IP) datagram is sent from one host to another in a local area network, the destination IP address must be resolved to a MAC address for transmission via the data link layer. When another host's IP address is known, and its MAC address is needed, a broadcast packet is sent out on the local network. This packet is known as an ARP request. The destination machine with the IP in the ARP request then responds with an ARP reply that contains the MAC address for that IP.
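This request/reply exchange can be reproduced in a few lines of Python, assuming the third-party scapy packet-manipulation library is installed and the script runs with raw-socket privileges; the address 192.168.1.1 is only a placeholder for a host on the local subnet.
# Sketch: resolve an IPv4 address to a MAC address with a broadcast ARP request.
from scapy.all import ARP, Ether, srp

def resolve_mac(ip):
    # Ethernet broadcast frame carrying an ARP "who-has" request (op=1).
    request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=ip)
    answered, _ = srp(request, timeout=2, verbose=False)
    for _, reply in answered:
        return reply[ARP].hwsrc      # MAC address taken from the "is-at" reply
    return None

print(resolve_mac("192.168.1.1"))    # placeholder address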
ARP is a stateless protocol. Network hosts will automatically cache any ARP replies they receive, regardless of whether network hosts requested them. Even ARP entries that have not yet expired will be overwritten when a new ARP reply packet is received. There is no method in the ARP protocol by which a host can authenticate the peer from which the packet originated. This behavior is the vulnerability that allows ARP spoofing to occur.
Attack anatomy
The basic principle behind ARP spoofing is to exploit the lack of authentication in the ARP protocol by sending spoofed ARP messages onto the LAN. ARP spoofing attacks can be run from a compromised host on the LAN, or from an attacker's machine that is connected directly to the target LAN.
An attacker using ARP spoofing disguises their machine as a legitimate host involved in the transmission of data on the network, so that users do not realise that the attacker is not the real host.
Generally, the goal of the attack is to associate the attacker's host MAC address with the IP address of a target host, so that any traffic meant for the target host will be sent to the attacker's host. The attacker may choose to inspect the packets (spying), while forwarding the traffic to the actual default destination to avoid discovery, modify the data before forwarding it (man-in-the-middle attack), or launch a denial-of-service attack by causing some or all of the packets on the network to be dropped.
Defenses
Static ARP entries
The simplest form of certification is the use of static, read-only entries for critical services in the ARP cache of a host. IP address-to-MAC address mappings in the local ARP cache may be statically entered. Hosts do not need to transmit ARP requests where such entries exist. While static entries provide some security against spoofing, they result in maintenance effort, as the address mappings for all systems in the network must be generated and distributed. This does not scale on a large network, since the mapping has to be set for each pair of machines, resulting in n² − n ARP entries that have to be configured when n machines are present; on each machine there must be an ARP entry for every other machine on the network, that is, n − 1 ARP entries on each of the n machines.
Detection and prevention software
Software that detects ARP spoofing generally relies on some form of certification or cross-checking of ARP responses. Uncertified ARP responses are then blocked. These techniques may be integrated with the DHCP server so that both dynamic and static IP addresses are certified. This capability may be implemented in individual hosts or may be integrated into Ethernet switches or other network equipment. The existence of multiple IP addresses associated with a single MAC address may indicate an ARP spoof attack, although there are legitimate uses of such a configuration. In a more passive approach a device listens for ARP replies on a network, and sends a notification via email when an ARP entry changes.
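A minimal passive monitor of this kind can be sketched in Python with the scapy library (an assumption made for illustration, not a description of any product named here): it remembers the first MAC address observed for each IP address and reports any later ARP reply that claims a different mapping.
# Sketch: passively watch ARP replies and warn when an IP-to-MAC mapping changes.
from scapy.all import ARP, sniff

seen = {}   # IP address -> MAC address first observed for it

def check_reply(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:          # op 2 = "is-at" (reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print(f"possible ARP spoofing: {ip} moved from {seen[ip]} to {mac}")
        seen.setdefault(ip, mac)

sniff(filter="arp", prn=check_reply, store=0)           # requires capture privileges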
AntiARP also provides Windows-based spoofing prevention at the kernel level. ArpStar is a Linux module for kernel 2.6 and Linksys routers that drops invalid packets that violate mapping, and contains an option to repoison/heal.
Some virtualized environments such as KVM also provide security mechanisms to prevent MAC spoofing between guests running on the same host.
Additionally some ethernet adapters provide MAC and VLAN anti-spoofing features.
OpenBSD watches passively for hosts impersonating the local host and notifies in case of any attempt to overwrite a permanent entry.
OS security
Operating systems react differently. Linux ignores unsolicited replies, but, on the other hand, uses responses to requests from other machines to update its cache. Solaris accepts updates on entries only after a timeout. In Microsoft Windows, the behavior of the ARP cache can be configured through several registry entries under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, ArpCacheLife, ArpCacheMinReferenceLife, ArpUseEtherSNAP, ArpTRSingleRoute, ArpAlwaysSourceRoute, ArpRetryCount.
Legitimate usage
The techniques that are used in ARP spoofing can also be used to implement redundancy of network services. For example, some software allows a backup server to issue a gratuitous ARP request in order to take over for a defective server and transparently offer redundancy.
There are two companies known to date that have tried to commercialize products centered around this strategy, Circle and CUJO. The latter has recently run into significant problems with its ARP-spoofing strategy in consumers' homes; they have now completely removed that capability and replaced it with a DHCP-based strategy.
ARP spoofing is often used by developers to debug IP traffic between two hosts when a switch is in use: if host A and host B are communicating through an Ethernet switch, their traffic would normally be invisible to a third monitoring host M. The developer configures A to have M's MAC address for B, and B to have M's MAC address for A; and also configures M to forward packets. M can now monitor the traffic, exactly as in a man-in-the-middle attack.
Tools
Defense
Spoofing
Some of the tools that can be used to carry out ARP spoofing attacks:
Arpspoof (part of the DSniff suite of tools)
Arpoison
Subterfuge
Ettercap
Seringe
ARP-FILLUP -V0.1
arp-sk -v0.0.15
ARPOc -v1.13
arpalert -v0.3.2
arping -v2.04
arpmitm -v0.2
arpoison -v0.5
ArpSpyX -v1.1
ArpToXin -v 1.0
Cain and Abel -v 4.3
cSploit -v 1.6.2
SwitchSniffer
APE – ARP Poisoning Engine
Simsang
zANTI -v2
elmoCut
NetSec Framework -v1
Minary
NetCut (Also has a defense feature)
ARPpySHEAR
See also
Cache poisoning
DNS spoofing
IP address spoofing
MAC spoofing
Proxy ARP
References
External links
Ethernet
Internet security
Types of cyberattacks
Hacking (computer security) |
39893 | https://en.wikipedia.org/wiki/Sanity%20check | Sanity check | A sanity check or sanity test is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material's creator was thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. A rule of thumb or back-of-the-envelope calculation may be used to perform the test. The advantage of performing an initial sanity test is that of speedily evaluating basic function.
In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of digits of the result is divisible by 9 is a sanity test—it will not catch every multiplication error, however it's a quick and simple method to discover many possible errors.
In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing.
Mathematical
A sanity test can refer to various orders of magnitude and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example:
If one were to attempt to square 738 and calculated 54,464, a quick sanity check could show that this result cannot be true: 738 is greater than 700, and 700² = 490,000, which is already far larger than 54,464. Since squaring positive integers preserves their inequality, the result cannot be true, and so the calculated result is incorrect. The correct answer, 738² = 544,644, is more than 10 times higher than 54,464.
In multiplication, 918 × 155 is not 142,135, since 918 is divisible by three but 142,135 is not (its digits add up to 16, which is not a multiple of three). Also, the product must end in the same digit as the product of the end-digits, 8 × 5 = 40, but 142,135 does not end in "0" like "40", while the correct answer, 142,290, does. An even quicker check is that the product of an even and an odd number is even, whereas 142,135 is odd.
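These checks are mechanical enough to automate. The following Python sketch applies the divisibility-by-three, last-digit and parity tests to a claimed product without performing the full multiplication; the numbers reuse the example above.
# Sketch: quick sanity tests for a claimed product a * b = claimed.
def product_sanity_checks(a, b, claimed):
    failures = []
    # If either factor is divisible by 3, the product must be as well.
    if (a % 3 == 0 or b % 3 == 0) and claimed % 3 != 0:
        failures.append("fails divisibility-by-3 check")
    # The product must end in the last digit of the product of the end digits.
    if claimed % 10 != (a % 10) * (b % 10) % 10:
        failures.append("fails last-digit check")
    # The product of an even and an odd number must be even.
    if (a % 2 == 0 or b % 2 == 0) and claimed % 2 != 0:
        failures.append("fails parity check")
    return failures or ["passes the quick checks (result may still be wrong)"]

print(product_sanity_checks(918, 155, 142135))   # the erroneous result above
print(product_sanity_checks(918, 155, 142290))   # the correct result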
Physical
The power output of a car cannot be 700 kJ, since the unit joules is a measure of energy, not power (energy per unit time). This is a basic application of dimensional analysis.
When determining physical properties, comparing to known or similar substances will often yield insight on whether or not the result is reasonable. For instance, most metals sink in water, so the density of most metals should be greater than that of water (about 1,000 kg/m³).
Fermi estimates will often provide insight on the order of magnitude of an expected value.
Software development
In software development, a sanity test (a form of software testing which offers "quick, broad, and shallow testing") evaluates the result of a subset of application functionality to determine whether it is possible and reasonable to proceed with further testing of the entire application. Sanity tests may sometimes be used interchangeably with smoke tests insofar as both terms denote tests which determine whether it is possible and reasonable to continue testing further. On the other hand, a distinction is sometimes made that a smoke test is a non-exhaustive test that ascertains whether the most crucial functions of a program work before proceeding with further testing whereas a sanity test refers to whether specific functionality such as a particular bug fix works as expected without testing the wider functionality of the software. In other words, a sanity test determines whether the intended result of a code change works correctly while a smoke test ensures that nothing else important was broken in the process. Sanity testing and smoke testing avoid wasting time and effort by quickly determining whether an application is too flawed to merit more rigorous QA testing, but needs more developer debugging.
Groups of sanity tests are often bundled together for automated unit testing of functions, libraries, or applications prior to merging development code into a testing or trunk version control branch, for automated building, or for continuous integration and continuous deployment.
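Such a bundle is often just a handful of near-trivial assertions, as in the following hypothetical Python module written for a pytest-style runner; the package name "accounting" and the function it exercises are placeholders invented for illustration. If these tests fail, running the full suite is pointless.
# Sketch: a minimal sanity-test module intended to run before the full test suite.
import importlib

def test_package_imports():
    # The package must at least import cleanly in this environment.
    assert importlib.import_module("accounting") is not None

def test_basic_code_path():
    accounting = importlib.import_module("accounting")
    # One cheap call through the main code path with a known answer.
    assert accounting.add_cents(150, 250) == 400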
Another common usage of the term sanity test is to denote checks that are performed within program code, usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more complicated the routine, the more important that its response be checked. The trivial case is checking to see whether the return value of a function indicated success or failure, and therefore ceasing further processing upon failure. This return value is actually often itself the result of a sanity check. For example, if the function attempted to open, write to, and close a file, a sanity check may be used to ensure that it did not fail on any of these actions, which is a sanity check often ignored by programmers.
These kinds of sanity checks may be used during development for debugging purposes and also to aid in troubleshooting software runtime errors. For example, in a bank account management application, a sanity check will fail if a withdrawal requests more money than the total account balance rather than allowing the account to go negative (which wouldn't be sane). Another sanity test might be that deposits or purchases correspond to patterns established by historical data—for example, large purchase transactions or ATM withdrawals in foreign locations never before visited by the card holder may be flagged for confirmation.
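A sketch of the withdrawal check described above might look as follows in Python; the account representation and the daily limit are hypothetical and chosen only to illustrate the pattern.
# Sketch: run-time sanity checks applied to a withdrawal request before processing it.
class SanityError(ValueError):
    pass

def withdraw(balance_cents, amount_cents, daily_limit_cents=500_000):
    if amount_cents <= 0:
        raise SanityError("withdrawal amount must be positive")
    if amount_cents > balance_cents:
        raise SanityError("withdrawal exceeds account balance")    # account cannot go negative
    if amount_cents > daily_limit_cents:
        raise SanityError("withdrawal exceeds daily limit, flag for confirmation")
    return balance_cents - amount_cents

print(withdraw(10_000, 2_500))   # remaining balance: 7500 cents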
Sanity checks are also performed upon installation of stable, production software code into a new computing environment, to ensure that all dependencies are met, such as a compatible operating system and link libraries. When a computing environment has passed all of the sanity checks, it is known as a sane environment for the installation program to proceed with reasonable expectation of success.
A "Hello, World!" program is often used as a sanity test for a development environment in a similar fashion. Rather than a complicated script running a set of unit tests, if this simple program fails to compile or execute, it proves that the supporting environment likely has a configuration problem that will prevent any code from compiling or executing. But if "Hello world" executes, then any problems experienced with other programs likely can be attributed to errors in that application's code rather than the environment.
See also
Certifying algorithm
Checksum
Fermi problem
Mental calculation
Proof of concept
References
Software testing
Error detection and correction |
10279391 | https://en.wikipedia.org/wiki/SPIP | SPIP | SPIP (Système de Publication pour l'Internet) is a free software content management system designed for web site publishing, oriented towards online collaborative editing.
The software is designed for easy setup, use and maintenance, and is used in public and private institutions. The last P in the word SPIP stands for both Partagé (shared) and Participatif (participative), in the sense that the software is designed for collective online editing. Its mascot is a flying squirrel.
It is used by institutional sites, community portals, academic sites, personal webpages, and news sites.
Technology
The software is written in PHP, and relies on one or more SQL databases: MySQL / MariaDB, SQLite or PostgreSQL.
The pages of the site are generated 'on the fly': the contents stored in the database are formatted through presentation 'skeletons' that merge HTML and SPIP's own markup language.
A caching system avoids regenerating pages at each request: when a page is requested, SPIP checks whether it already exists in its cache and is not too old; if so, the cached copy is displayed. The life-span of a page is adjustable in its presentation skeleton.
History
SPIP was originally conceived for the uzine.net site, after which its designers released it under the GPL License. Since its launch in 2001, it has also been used for the Le Monde diplomatique newspaper and www.vacarme.eu.org; the webmaster of Le Monde diplomatique is one of the initiators of SPIP.
SPIP integrates a cache mechanism, an authentication system, an automatic setup module and an interface for administration and input of articles. SPIP can create dynamic pages without any PHP knowledge, using a web template system known as 'skeletons'.
In early 2003, the 1.6 version made it possible to display the private back-end interface in several languages. A space for translators was set up in order to multiply the number of available versions.
In January 2004, the 1.7 version of SPIP enabled the management of multilingual websites and implemented a search and content indexing module. It also enabled syndication of other sites' contents.
In April 2005, the private interface of version 1.8 was reworked in order to take into account an analysis of ergonomic processes. An important modification for developers was a new compiler at SPIP's core, which made it possible to write skeletons with more complex functionality without requiring any coding work in PHP.
Other re-workings are currently under way, such as the reworking of the private interface in the form of skeletons.
The 1.9 version introduced a plug-in system and numerous changes, notably in the organisation of component files (particularly the transition from '.php3' to '.php' file extensions).
The 1.9.1 version introduced a template system, akin to Wikipedia's.
The 1.9.2 version modified the directory structure to allow a better mutualisation of sources.
The 2.0 version supports multiple SQL databases, and introduces easy skeletons for web forms.
The 2.1 version builds on the concept of modules, along with improved security and stability, a new interface for plugins management, and other features.
The 3.0 major version was released on 19 May 2012, completely redesigned towards a higher degree of modularity. All non-core functionalities are now implemented as plugins. The private area has been thoroughly rewritten in order to make the editorial objects as generic as possible, and it is designed to make creating new editorial objects and customizing existing ones easier and quicker. The new DATA loop allows SPIP to connect to any kind of data (not only SQL tables). These data may be found locally (XML, CSV, YAML files, enumerations...) or directly at a URL (lists of YouTube videos, Flickr photos, Google spreadsheets, online calendars...), so the web itself may be used as a database.
The 3.1 version was released on 6 January 2016. It provides updates to the JavaScript libraries and default CSS styles, enhances the editorial space, offers new tools for writing skeletons, and brings performance and code-writing improvements.
The 3.2 version was released on 13 October 2017. It includes an update of embedded JavaScript libraries, better ergonomics of the private space as well as other improvements.
See also
Comparison of content management systems
List of collaborative software
List of applications with iCalendar support
Article notes and references
External links
Official SPIP website: presentation, download, documentation, etc.
Detailed history of SPIP
Translate SPIP
Programming with SPIP 3.0
The SPIP galaxy
Plugins-SPIP
SPIP-Contrib
spip-en Mailing list for English language users
Free content management systems
Free software programmed in PHP |
32421682 | https://en.wikipedia.org/wiki/Project%20Harmony%20%28FOSS%20group%29 | Project Harmony (FOSS group) | Project Harmony is an initiative by Canonical Ltd. about contributor agreements for Open Source software. The aim of the Harmony project is to develop templates of Contributor License Agreements for use by Free and open source software (FOSS) projects.
The Canonical initiative was announced in June 2010 by Amanda Brock, General Counsel at Canonical. In July 2011, the project released version 1.0 of its agreement templates. Following this release, the project was seen by some as an important step for intellectual property and copyright management for open source software, and by others as "Making an exception (ie copyright aggregation) the norm".
Contributor agreement options
The project proposes two types of options for the Contributor License Agreements:
Copyright License: The contributors retain the copyright to their contribution.
Copyright Assignment: The contributors transfer the copyright of their contribution to the project.
See also
Free and open source software (FOSS)
Contributor License Agreement
Harmony CLA
References
External links
Project page
Software licenses
Copyright law
Intellectual property activism
Free and open-source software organizations |
27580000 | https://en.wikipedia.org/wiki/Libvpx | Libvpx | libvpx is a free software video codec library from Google and the Alliance for Open Media (AOMedia). It serves as the reference software implementation for the VP8 and VP9 video coding formats; for AV1, that role is filled by libaom, a special fork that was stripped of backwards compatibility.
As free software, it is also published in source code form under the terms of the revised BSD license. It ships with the command-line tools vpxenc/aomenc and vpxdec/aomdec, which build on its functionality.
History
libvpx originates from the video codec company On2 Technologies, which sold its first software codec in the mid-1990s.
libvpx was released as free software by Google on May 19, 2010, after the acquisition of On2 Technologies for an estimated sum of over 120 million US dollars.
In June 2010, Google amended the VP8 codec software license to the 3-clause BSD license after some contention over whether the original license was actually open source.
Google was criticised for dumping untidy code with bad documentation for the initial release of libvpx and for developing behind closed doors without involving the community in the process. The development process was opened after the release of VP9.
Preliminary support for VP9 was added to libvpx on June 17, 2013. It was officially introduced with the release of version 1.3 on December 2, which also supports lossless compression.
In April 2015, Google released a significant update to its libvpx library, with version 1.4.0 adding support for encoding VP9 with 10-bit and 12-bit bit depth, 4:2:2 and 4:4:4 chroma subsampling (VP9 profiles 1, 2, and 3), and VP9 multithreaded decoding/encoding.
Versions 1.5 (November 2015), 1.6 (July 2016), 1.7 (January 2018), and 1.8 (February 2019) delivered significant speedups, both for encoding and decoding.
Features
libvpx implements single-pass and two-pass encoding modes, with either bitrate or quality target settings.
libvpx offers an asymmetric codec – with encoding taking much longer than decoding – and options for configuring encoding expense independently from decoding complexity.
A lookahead of up to 25 frames can be configured, which improves compression efficiency but introduces latency and thereby hurts real-time performance.
libvpx includes a mode where the maximum CPU resources possible will be used while still keeping the encoding speed almost exactly equivalent to the playback speed (realtime), keeping the quality as high as possible without lag.
libvpx supports Rec. 601, Rec. 709, Rec. 2020, SMPTE-170, SMPTE-240, and sRGB color spaces.
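A minimal way to exercise several of these options is to drive the vpxenc command-line tool from a script. The Python sketch below requests a two-pass, bitrate-targeted VP9 encode with the maximum 25-frame lookahead; the flag names reflect vpxenc's commonly documented options and the file names are placeholders, so treat the exact invocation as illustrative rather than canonical.
# Sketch: run a two-pass VP9 encode by invoking vpxenc (file names are placeholders).
import subprocess

def encode_vp9(raw_y4m="input.y4m", out_webm="output.webm", bitrate_kbps=1500):
    cmd = [
        "vpxenc", raw_y4m,
        "--codec=vp9",
        "--passes=2",                       # two-pass encoding mode
        "--end-usage=vbr",                  # bitrate-targeted rate control
        f"--target-bitrate={bitrate_kbps}",
        "--lag-in-frames=25",               # maximum lookahead; adds latency
        "--cpu-used=2",                     # trade encoding speed against quality
        "--threads=4",
        "-o", out_webm,
    ]
    subprocess.run(cmd, check=True)

encode_vp9()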
Performance
At high resolutions (e.g., UHD) VP9 encoded by libvpx for VOD applications provides a significant improvement over H.264 encoded by x264. HEVC encoded by x265 may achieve even better quality, but the royalty-free nature of VP9 makes it a compelling option for delivering high resolution video on supported platforms.
Decoding performance is relatively slow, partially in order to keep the code base easier to maintain.
Compared to the initial release of libvpx, ffvp8 from the FFmpeg project improved performance by 22 to over 66%. In 2016, alternative VP9 decoders still achieved 25–50% faster decoding.
Technology
libvpx is written in C and assembly language. It does not have complete SIMD coverage as of 2015.
Usage
libvpx is used by major OTT video services including YouTube, Netflix, Amazon, JW Player, Brightcove, and Telestream, which are among the biggest sources of internet traffic, with Netflix alone accounting for nearly a third of all internet traffic in the United States as of 2017.
There are alternatives for decoding VP8 and VP9, both commercial and closed source as well as open source. For encoding there are only commercial alternatives and some unfinished experimental software for VP8 including xvp8 as of 2016.
References
External links
Free video codecs
C (programming language) libraries
Free software programmed in C
Free computer libraries
Google software |
22048289 | https://en.wikipedia.org/wiki/Handle%20System | Handle System | The Handle System is the Corporation for National Research Initiatives's proprietary registry assigning persistent identifiers, or handles, to information resources, and for resolving "those handles into the information necessary to locate, access, and otherwise make use of the resources".
As with handles used elsewhere in computing, Handle System handles are opaque, and encode no information about the underlying resource, being bound only to metadata regarding the resource. Consequently, the handles are not rendered invalid by changes to the metadata.
The system was developed by Bob Kahn at the Corporation for National Research Initiatives (CNRI). The original work was funded by the Defense Advanced Research Projects Agency (DARPA) between 1992 and 1996, as part of a wider framework for distributed digital object services, and was thus contemporaneous with the early deployment of the World Wide Web, with similar goals.
The Handle System was first implemented in autumn 1994, and was administered and operated by CNRI until December 2015, when a new "multi-primary administrator" (MPA) mode of operation was introduced. The DONA Foundation now administers the system's Global Handle Registry and accredits MPAs, including CNRI and the International DOI Foundation.
The system currently provides the underlying infrastructure for such handle-based systems as Digital Object Identifiers and DSpace, which are mainly used to provide access to scholarly, professional and government documents and other information resources.
CNRI provides specifications and the source code for reference implementations for the servers and protocols used in the system under a royalty-free "Public License", similar to an open source license.
Thousands of handle services are currently running. Over 1000 of these are at universities and libraries, but they are also in operation at national laboratories, research groups, government agencies, and commercial enterprises, receiving over 200 million resolution requests per month.
Specifications
The Handle System is defined in informational RFCs 3650, 3651 and 3652 of the Internet Engineering Task Force (IETF); it includes an open set of protocols, a namespace, and a reference implementation of the protocols. Documentation, software, and related information is provided by CNRI on a dedicated website
Handles consist of a prefix which identifies a "naming authority" and a suffix which gives the "local name" of a resource. Similar to domain names, prefixes are issued to naming authorities by one of the "multi-primary administrators" of the system upon payment of a fee, which must be renewed annually. A naming authority may create any number of handles, with unique "local names", within their assigned prefixes. An example of a handle is:
20.1000/100
10.1000/182
In the first example, which is the handle for the HANDLE.NET software license, 20.1000 is the prefix assigned to the naming authority (in this case, Handle.net itself) and 100 is the local name within that namespace. The local name may consist of any characters from the Unicode UCS-2 character set. The prefix also consists of any UCS-2 characters, other than "/". The prefixes consist of one or more naming authority segments, separated by periods, representing a hierarchy of naming authorities. Thus, in the example, 20 is the naming authority prefix for CNRI, while 1000 designates a subordinate naming authority within the 20 prefix. Other examples of top-level prefixes for the federated naming authorities of the DONA Foundation are 10 for DOI handles; 11 for handles assigned by the ITU; 21 for handles issued by the German Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG), the scientific computing center of the University of Göttingen; and 86 for the Coalition of Handle Services – China. Older "legacy" prefixes issued by CNRI before the "multi-primary administrator" (MPA) structure was instituted are typically four or five digits (for example, the prefix used for handles administered by the University of Leicester). All prefixes must be registered in the Global Handle Registry through a DONA Foundation-approved registrar, normally for a fee.
As with other uses of handles in computing, the handle is opaque; that is, it encodes no information about the underlying resource and provides only the means to retrieve metadata about the resource.
This may be contrasted with a Uniform Resource Locator (URL), which may encode within the identifier such attributes of the resource as the protocol to be used to access the server holding the resource, the server host name and port number, and perhaps even location specifics such as the name of a file in the server file system containing the resource. In the Handle System, these specifics are not encoded in the handle, but are found in the metadata to which the handle is bound.
The metadata may include many attributes of the information resource, such as its locations, the forms in which it is available, the types of access (e.g. "free" versus "paid") offered, and to whom. The processing of the metadata to determine how and where the resource should be accessed, and the provision of the resource to the user, are performed in a separate step, called "resolution", using a Resolver, a server which may be different from the ones involved in exchanging the handle for the metadata. Unlike URLs, which may become invalid if the metadata embedded within them becomes invalid, handles do not become invalid and do not need to change when locations or other metadata attributes change. This helps to prevent link rot, as changes in the information resource (such as location) need only be reflected in changes to the metadata, rather than in changes in every reference to the resource.
Each handle may have its own administrator and administration of the handles can be done in a distributed environment, similar to DNS domain names. The name-to-value bindings may also be secured, both via signatures to verify the data and via challenge response to verify the transmission of the data, allowing handles to be used in trust management applications.
It is possible for the same underlying information resource to be associated with multiple handles, as when two university libraries generate handles (and therefore possibly different sets of metadata) for the same book.
The Handle System is compatible with the Domain Name System (DNS), but does not require it, unlike persistent identifiers such as PURLs or ARKs, which are similar to handles, but which utilise domain names. However, unlike these domain-name based approaches, handles do require a separate prefix registration process and handle servers separate from the domain name servers.
Handles can be used natively, or expressed as Uniform Resource Identifiers (URIs) through a namespace within the info URI scheme; for example, 20.1000/100 may be written as the URI info:hdl/20.1000/100. Some Handle System namespaces, such as Digital Object Identifiers, are "info:" URI namespaces in their own right; for example, info:doi/10.1000/182 is another way of writing the handle for the current revision of the DOI Handbook as a URI.
Some Handle System namespaces define special presentation rules. For example, Digital Object Identifiers, which represent a high percentage of the extant handles, are usually presented with a "doi:" prefix: doi:10.1000/182.
Any Handle may be expressed as a Uniform Resource Locator (URL) through the use of the generic HTTP proxy server:
https://hdl.handle.net/20.1000/100
Some Handle-based systems offer an HTTP proxy server that is intended for use with their own system such as:
https://doi.org/10.1000/182.
Implementation
Implementation of the Handle System consists of Local Handle Services, each of which is made up of one or more sites that provide the servers that store specific handles. The Global Handle Registry is a unique Local Handle Service which stores information on the prefixes (also known as naming authorities) within the Handle System and can be queried to find out where specific handles are stored on other Local Handle Services within this distributed system.
The Handle System website provides a series of implementation tools, notably the HANDLE.NET Software and HANDLE.NET Client Libraries. Handle clients can be embedded in end user software (e.g., a web browser) or in server software (e.g., a web server) and extensions are already available for Adobe Acrobat and Firefox.
Handle client software libraries are available in both C and Java. Some applications have developed specific add-on tools, e.g., for the DOI System.
The interoperable network of distributed handle resolver servers (also known as the Proxy Server System) are linked through a Global Resolver (which is one logical entity though physically decentralised and mirrored). Users of Handle System technology obtain a handle prefix created in the Global Handle Registry. The Global Handle Registry maintains and resolves the prefixes of locally maintained handle services. Any local handle service can, therefore, resolve any handle through the Global Resolver.
Handles (identifiers) are passed by a client, as a query of the naming authority/prefix, to the Handle System's Global Handle Registry (GHR). The GHR responds by sending the client the location information for the relevant Local Handle Service (which may consist of multiple servers in multiple sites); a query is then sent to the relevant server within the Local Handle Service. The Local Handle Service returns the information needed to acquire the resource, e.g., a URL which can then be turned into an HTTP re-direct. (Note: if the client already has information on the appropriate LHS to query, the initial query to GHR is omitted)
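As a sketch of how a client might exercise this resolution chain without running its own handle client, the following Java fragment simply asks the public HTTP proxy to resolve a handle and return the record of values bound to it. The /api/handles/ REST path is an assumption about the proxy's JSON interface rather than part of the Handle protocol itself; a native handle client would instead speak the Handle protocol to the GHR and the Local Handle Service directly.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxyResolve {
    public static void main(String[] args) throws Exception {
        // Ask the public proxy to resolve the handle 20.1000/100 and return
        // its handle record (the bound metadata) as JSON.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://hdl.handle.net/api/handles/20.1000/100"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON handle record
    }
}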
Though the original model from which the Handle System derives dealt with management of digital objects, the Handle System does not mandate any particular model of relationships between the identified entities, nor is it limited to identifying only digital objects: non-digital entities may be represented as a corresponding digital object for the purposes of digital object management. Some care is needed in the definition of such objects and how they relate to non-digital entities; there are established models that can aid in such definitions e.g., Functional Requirements for Bibliographic Records (FRBR), CIDOC CRM, and indecs content model. Some applications have found it helpful to marry such a framework to the handle application: for example, the Advanced Distributed Learning (ADL) Initiative brings together Handle System application with existing standards for distributed learning content, using a Shareable Content Object Reference Model (SCORM), and the Digital Object Identifier (DOI) system implementation of the Handle System has adopted it together with the indecs framework to deal with semantic interoperability.
The Handle System also makes explicit the importance of organizational commitment to a persistent identifier scheme, but does not mandate one model for ensuring such commitment. Individual applications may choose to establish their own sets of rules and social infrastructure to ensure persistence (e.g., when used in the DSpace application, and the DOI application).
Design principles
The Handle System is designed to meet the following requirements, so as to contribute to persistence:
The identifier string:
is not based on any changeable attributes of the entity (location, ownership, or any other attribute that may change without changing the referent's identity);
is opaque (preferably a ‘dumb number’: a well known pattern invites assumptions that may be misleading, and meaningful semantics may not translate across languages and may cause trademark conflicts);
is unique within the system (to avoid collisions and referential uncertainty);
has desirable optional features that should be supported (human-readable, cut-and-paste-able, embeddable; fits common systems, e.g., the URI specification).
The identifier resolution mechanism:
is reliable (using redundancy, no single points of failure, and fast enough to not appear broken);
is scalable (higher loads simply managed with more computers);
is flexible (can adapt to changing computing environments; useful to new applications);
is trusted (both resolution and administration have technical trust methods; an operating organization is committed to the long term);
builds on an open architecture (encouraging a community's efforts to be leveraged in building applications on the infrastructure);
is transparent (users need not know the infrastructure details).
Applications
Among the objects that are currently identified by handles are journal articles, technical reports, books, theses and dissertations, government documents, metadata, distributed learning content, and data sets. Handles are being used in digital watermarking applications, GRID applications, repositories, and more. Although individual users may download and use the HANDLE.NET software independently, many users have found it beneficial to collaborate in developing applications in a federation, using common policy or additional technology to provide shared services. As one of the first persistent identifier schemes, the Handle System has been widely adopted by public and private institutions and proven over several years. (See Paradigm, Persistent identifiers.)
Handle System applications may use handles as simple persistent identifiers (as most commonly used, to resolve to the current URL of an object), or may choose to take advantage of other features. Its support for the simultaneous return as output of multiple pieces of current information related to the object, in defined data structures, enables priorities to be established for the order in which the multiple resolutions will be used. Handles can, therefore, resolve to different digital versions of the same content, to mirror sites, or to different business models (pay vs. free, secure vs. open, public vs. private). They can also resolve to different digital versions of differing content, such as a mix of objects required for a distance-learning course.
There are thousands of handle services running today, located in 71 countries, on 6 continents; over 1000 of them run at universities and libraries. Handle services are being run by user federations, national laboratories, universities, computing centers, libraries (national and local), government agencies, contractors, corporations, and research groups. Major publishers use the Handle System for persistent identification of commercially traded and Open Access content through its implementation with the Digital Object Identifier (DOI) system.
The number of prefixes, which allow users to assign handles, is growing and stands at over 12,000 as of early 2014. There are six top-level Global Handle Registry servers that receive (on average) 68 million resolution requests per month. Proxy servers known to CNRI, passing requests to the system on the Web, receive (on average) 200 million resolution requests per month. (Statistics from Handle Quick Facts.)
In 2010, CNRI and ITU (International Telecommunication Union) entered into an agreement to collaborate on use of the Handle System (and the Digital Object Architecture more generally) and are working on the specific details of that collaboration; in April 2009 ITU listed the Handle System as an "emerging trend".
Licences and use policy
Handle System, HANDLE.NET and Global Handle Registry are trademarks of the Corporation for National Research Initiatives (CNRI), a non-profit research and development corporation in the USA. The Handle System is the subject of patents by CNRI, which licenses its Handle System technology through a public license, similar to an open source license, in order to enable broader use of the technology. Handle System infrastructure is supported by prefix registration and service fees, with the majority coming from single prefix holders. The largest current single contributor is the International DOI Foundation. The Public License allows commercial and non-commercial use at low cost of both its patented technology and the reference implementation of the software, and allows the software to be freely embedded in other systems and products. A Service Agreement is also available for users who intend to provide identifier and/or resolution services using the Handle System technology under the Handle System public license.
Related technologies
The Handle System represents several components of a long-term digital object architecture. In January 2010 CNRI released its general-purpose Digital Object Repository software, another major component of this architecture. More information about the release, including protocol specification, source code and ready-to-use system, clients and utilities, is available.
See also
Archival Resource Key (ARK)
Digital Library
Electronic Publishing
Hypertext
Institutional Repository
Linked Data
OpenURL
Permalink
Persistent URL
Resource Description Framework
Semantic Web
Uniform Resource Name
References
External links
Persistent identifiers project at Paradigm
Internet protocols
Identifiers
Computer-related introductions in 1994 |
222828 | https://en.wikipedia.org/wiki/Unit%20testing | Unit testing | In computer programming, unit testing is a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use.
Description
Unit tests are typically automated tests written and run by software developers to ensure that a section of an application (known as the "unit") meets its design and behaves as intended. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, or an individual method. By writing tests first for the smallest testable units, then the compound behaviors between those, one can build up comprehensive tests for complex applications.
To isolate issues that may arise, each test case should be tested independently. Substitutes such as method stubs, mock objects, fakes, and test harnesses can be used to assist testing a module in isolation.
During development, a software developer may code criteria, or results that are known to be good, into the test to verify the unit's correctness. During test case execution, frameworks log tests that fail any criterion and report them in a summary. The most commonly used approach for this is to pair each test of a function with an expected value.
Writing and maintaining unit tests can be made faster by using parameterized tests. These allow the execution of one test multiple times with different input sets, thus reducing test code duplication. Unlike traditional unit tests, which are usually closed methods and test invariant conditions, parameterized tests take any set of parameters. Parameterized tests are supported by TestNG, JUnit and its .NET counterpart, xUnit. Suitable parameters for the unit tests may be supplied manually or in some cases are automatically generated by the test framework. In recent years, support was added for writing more powerful (unit) tests that leverage the concept of theories: test cases that execute the same steps but use test data generated at runtime, unlike regular parameterized tests, whose input sets are pre-defined.
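As an illustration, a parameterized test using JUnit 4's Parameterized runner might look as follows. It assumes an Adder interface and AdderImpl class like the ones used in the Example section below, and runs the same assertion once per row of input data:

import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class AdderParameterizedTest {
    private final int a;
    private final int b;
    private final int expectedSum;

    public AdderParameterizedTest(int a, int b, int expectedSum) {
        this.a = a;
        this.b = b;
        this.expectedSum = expectedSum;
    }

    // Each row is one execution of the test: {a, b, expected sum}.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 1, 1, 2 }, { 1, 2, 3 }, { -1, -2, -3 }, { 0, 0, 0 }
        });
    }

    @Test
    public void addsBothOperands() {
        // Adder and AdderImpl are assumed to be the classes from the Example section.
        Adder adder = new AdderImpl();
        assertEquals(expectedSum, adder.add(a, b));
    }
}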
Advantages
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.
Unit testing finds problems early in the development cycle. This includes both bugs in the programmer's implementation and flaws or missing parts of the specification for the unit. The process of writing a thorough set of tests forces the author to think through inputs, outputs, and error conditions, and thus more crisply define the unit's desired behavior. The cost of finding a bug before coding begins or when the code is first written is considerably lower than the cost of detecting, identifying, and correcting the bug later. Bugs in released code may also cause costly problems for the end-users of the software. Code can be impossible or difficult to unit test if poorly written, thus unit testing can force developers to structure functions and objects in better ways.
In test-driven development (TDD), which is frequently used in both extreme programming and scrum, unit tests are created before the code itself is written. When the tests pass, that code is considered complete. The same unit tests are run against that function frequently as the larger code base is developed either as the code is changed or via an automated process with the build. If the unit tests fail, it is considered to be a bug either in the changed code or the tests themselves. The unit tests then allow the location of the fault or failure to be easily traced. Since the unit tests alert the development team of the problem before handing the code off to testers or clients, potential problems are caught early in the development process.
Unit testing allows the programmer to refactor code or upgrade system libraries at a later date, and make sure the module still works correctly (e.g., in regression testing). The procedure is to write test cases for all functions and methods so that whenever a change causes a fault, it can be quickly identified. Unit tests detect changes which may break a design contract.
Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.
Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit, and how to use it, can look at the unit tests to gain a basic understanding of the unit's interface (API).
Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.
When software is developed using a test-driven approach, the combination of writing the unit test to specify the interface plus the refactoring activities performed after the test has passed, may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior.
Limitations and disadvantages
Testing will not catch every error in the program, because it cannot evaluate every execution path in any but the most trivial programs. This problem is a superset of the halting problem, which is undecidable. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing should be done in conjunction with other software testing activities, as they can only show the presence or absence of particular errors; they cannot prove a complete absence of errors. To guarantee correct behavior for every execution path and every possible input, and ensure the absence of errors, other techniques are required, namely the application of formal methods to proving that a software component has no unexpected behavior.
An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests. Integration testing typically still relies heavily on humans testing manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.
Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code. This obviously takes time and its investment may not be worth the effort. There are problems that cannot easily be tested at all – for example those that are nondeterministic or involve multiple threads. In addition, code for a unit test is as likely to be buggy as the code it is testing. Fred Brooks in The Mythical Man-Month quotes: "Never go to sea with two chronometers; take one or three." Meaning, if two chronometers contradict, how do you know which one is correct?
Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful tests. It is necessary to create relevant initial conditions so the part of the application being tested behaves like part of the complete system. If these initial conditions are not set correctly, the test will not be exercising the code in a realistic context, which diminishes the value and accuracy of unit test results.
To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case failures are reviewed regularly and addressed immediately. If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: because the software is being developed on a different platform than the one it will eventually run on, a test program cannot readily be run in the actual deployment environment, as is possible with desktop programs.
Unit tests tend to be easiest when a method has input parameters and some output. It is not as easy to create unit tests when a major function of the method is to interact with something external to the application. For example, a method that will work with a database might require a mock up of database interactions to be created, which probably won't be as comprehensive as the real database interactions.
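One common way around this is to hide the external interaction behind an interface and substitute a hand-rolled fake in the test. The sketch below is illustrative only; the UserStore interface, Greeter class and in-memory fake are hypothetical names invented for this example, not part of any particular framework:

import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class GreeterTest {
    // The production code would talk to a real database through this interface.
    interface UserStore {
        String findNameById(int id);
    }

    // Unit under test: formats a greeting for a stored user.
    static class Greeter {
        private final UserStore store;
        Greeter(UserStore store) { this.store = store; }
        String greet(int id) { return "Hello, " + store.findNameById(id) + "!"; }
    }

    // In-memory fake standing in for the database during the test.
    static class InMemoryUserStore implements UserStore {
        private final Map<Integer, String> names = new HashMap<>();
        void put(int id, String name) { names.put(id, name); }
        public String findNameById(int id) { return names.get(id); }
    }

    @Test
    public void greetsUserByStoredName() {
        InMemoryUserStore store = new InMemoryUserStore();
        store.put(7, "Ada");
        assertEquals("Hello, Ada!", new Greeter(store).greet(7));
    }
}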
Example
Here is a set of test cases in Java that specify a number of elements of the implementation. First, that there must be an interface called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer parameters, which returns another integer. It also specifies the behaviour of this method for a small range of values over a number of test methods.
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class TestAdder {
@Test
public void testSumPositiveNumbersOneAndOne() {
Adder adder = new AdderImpl();
assertEquals(2, adder.add(1, 1));
}
// can it add the positive numbers 1 and 2?
@Test
public void testSumPositiveNumbersOneAndTwo() {
Adder adder = new AdderImpl();
assertEquals(3, adder.add(1, 2));
}
// can it add the positive numbers 2 and 2?
@Test
public void testSumPositiveNumbersTwoAndTwo() {
Adder adder = new AdderImpl();
assertEquals(4, adder.add(2, 2));
}
// is zero neutral?
@Test
public void testSumZeroNeutral() {
Adder adder = new AdderImpl();
assertEquals(0, adder.add(0, 0));
}
// can it add the negative numbers -1 and -2?
@Test
public void testSumNegativeNumbers() {
Adder adder = new AdderImpl();
assertEquals(-3, adder.add(-1, -2));
}
// can it add a positive and a negative?
@Test
public void testSumPositiveAndNegative() {
Adder adder = new AdderImpl();
assertEquals(0, adder.add(-1, 1));
}
// how about larger numbers?
@Test
public void testSumLargeNumbers() {
Adder adder = new AdderImpl();
assertEquals(2222, adder.add(1234, 988));
}
}
In this case the unit tests, having been written first, act as a design document specifying the form and behaviour of a desired solution, but not the implementation details, which are left for the programmer. Following the "do the simplest thing that could possibly work" practice, the easiest solution that will make the test pass is shown below.
interface Adder {
int add(int a, int b);
}
class AdderImpl implements Adder {
public int add(int a, int b) {
return a + b;
}
}
As executable specifications
Using unit-tests as a design specification has one significant advantage over other design methods: The design document (the unit-tests themselves) can itself be used to verify the implementation. The tests will never pass unless the developer implements a solution according to the design.
Unit tests lack some of the accessibility of a diagrammatic specification such as a UML diagram, but such diagrams may be generated from the unit tests using automated tools. Most modern languages have free tools (usually available as extensions to IDEs). Free tools, like those based on the xUnit framework, outsource to another system the graphical rendering of a view for human consumption.
Applications
Extreme programming
Unit testing is the cornerstone of extreme programming, which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within the development group.
Extreme programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass.
Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming mandates a "test everything that can possibly break" strategy, over the traditional "test every execution path" method. This leads developers to develop fewer tests than classical methods would, but this is less a problem than a restatement of fact, as classical methods have rarely been followed methodically enough for all execution paths to have been thoroughly tested. Extreme programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources.
Crucially, the test code is considered a first class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression test.
Unit testing is also critical to the concept of Emergent Design. As emergent design is heavily dependent upon refactoring, unit tests are an integral component.
Unit testing frameworks
Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite. They help simplify the process of unit testing, having been developed for a wide variety of languages.
It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertions, exception handling, or other control flow mechanisms to signal failure. Unit testing without a framework is valuable in that it avoids the barrier to entry posed by adopting one; however, having scant unit tests is hardly better than having none at all, whereas once a framework is in place, adding unit tests becomes relatively easy. In some frameworks many advanced unit test features are missing or must be hand-coded.
Language-level unit testing support
Some programming languages directly support unit testing. Their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Additionally, the boolean conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit test code, such as what is used for if and while statements.
Languages with built-in unit testing support include:
Apex
Cobra
Crystal
D
Go
LabVIEW
MATLAB
Python
Racket
Ruby
Rust
Some languages without built-in unit-testing support have very good unit testing libraries/frameworks. Those languages include:
ABAP
C++
C#
Clojure
Elixir
Java
JavaScript
Objective-C
Perl
PHP
PowerShell
R with testthat
Scala
tcl
Visual Basic .NET
Xojo with XojoUnit
See also
Acceptance testing
Characterization test
Component-based usability testing
Design predicates
Design by contract
Extreme programming
Functional testing
Integration testing
List of unit testing frameworks
Regression testing
Software archaeology
Software testing
Test case
Test-driven development
xUnit – a family of unit testing frameworks.
References
Further reading
External links
Test Driven Development (Ward Cunningham's Wiki)
Extreme programming
Articles with example Java code
Types of tools used in software development |
44804934 | https://en.wikipedia.org/wiki/Swatantra%202014 | Swatantra 2014 | Swatantra 2014 (from the Indic word Swatantra meaning 'independent', or 'free' as in 'free will') was the fifth international free software conference organized by the International Centre for Free and Open Source Software (ICFOSS), an autonomous organization set up by the Government of Kerala, India for the propagation of FOSS. It was held in Thiruvananthapuram, Kerala, India during 18–20 December 2014. Among supporting organizations of the conference were the Free Software Foundation of India, Centre for Internet and Society (India), Software Freedom Law Center (India) and Swathantra Malayalam Computing.
Objective
According to Satish Babu, Director of ICFOSS, free software is capable of offering a freedom-enhancing, robust and reliable alternative to proprietary software, with additional economic advantages, and could therefore find application in public and private sector organizations in fields including education, arts, and culture.
Event
The theme of the event was "Free Software for a Free World". Over 200 delegates attended the conference. The inaugural speech was delivered by Richard Stallman, founder of the free software movement, who was of the view that such software should enable access without compromising the security of one's identity. He also said that cameras installed on streets were a threat to the privacy of the public.
Other than Stallman, notable personalities like Smári McCarthy and Nina Paley attended the event.
Prof. Rahul De of IIM Bangalore, a speaker at the event, reported during his presentation that substantial sums could be saved in India if free software were used for ICT in education in the 320,000 schools across the country.
Sessions
The following parallel sessions were held:
Indian Language Computing
Wikipedia/Wikimedia activities
Computational Biology & Sciences
Free Culture
Freedom on the Cloud
Free Mobile Platforms
Education & Spoken Tutorials
Surveillance, security and privacy & Internet Governance
Mapping & OpenStreetMaps
Computing for the Differently-abled
Free Software in e-Governance
Open Hardware & IoT
Supporting organizations
The following are the organizations that supported the event:
Centre for Internet and Society
SFLC.IN, Delhi
Swathanthra Malayalam Computing
FOSSEE, IIT-Bombay
SPACE, Thiruvananthapuram
Department of Computational Biology and Bioinformatics, Kerala University, Thiruvananthapuram
Spoken Tutorials, IIT-Bombay
IEEE Kerala Section
References
External links
2014 in India
Free software culture and documents
Software industry in India
Science and technology in Thiruvananthapuram |
24913905 | https://en.wikipedia.org/wiki/Prothous | Prothous | In Greek mythology, Prothous (Ancient Greek: Πρόθοος Prothoös) may refer to:
Prothous, an Arcadian prince, one of the 50 sons of the impious King Lycaon, either by the naiad Cyllene, by Nonacris, or by an unknown woman. He and his brothers were the most nefarious and carefree of all people. To test them, Zeus visited them in the form of a peasant. These brothers mixed the entrails of a child into the god's meal, whereupon the enraged Zeus threw the meal over the table. Prothous was killed, along with his brothers and their father, by a lightning bolt of the god.
Prothous, son of Thestius and brother of Althaea. He was one of the Calydonian Boar Hunters.
Prothous, son of the Aetolian Agrius, killed by Diomedes.
Prothous of Argos, a warrior in the army of the Seven against Thebes. He cast lots to assign places in the chariot race at the funeral games of Opheltes.
Prothous, a defender of Thebes against the Seven, killed by Tydeus.
Prothous, son of Tenthredon and either Eurymache or Cleobule the daughter of Eurytus. He was the commander of the Magnetes, who dwelt around Mount Pelion and the river Peneus, and one of the Greek leaders in the Trojan War. Prothous brought forty ships to Troy. According to one version, Prothous, together with Meges and a number of others, died as a result of a shipwreck near Cape Caphereus of Euboea; in another version, Prothous, Eurypylus and Guneus ended up in Libya and settled there.
Prothous, one of the Suitors of Penelope who came from Same along with 22 other wooers. He, with the other suitors, was killed by Odysseus with the aid of Eumaeus, Philoetius, and Telemachus.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. . Online version at the Perseus Digital Library. Greek text available from the same website.
Conon, Fifty Narrations, surviving as one-paragraph summaries in the Bibliotheca (Library) of Photius, Patriarch of Constantinople translated from the Greek by Brady Kiesling. Online version at the Topos Text Project.
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project.
Publius Papinius Statius, The Thebaid. Vol I-II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library.
Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015.
Princes in Greek mythology
Sons of Lycaon
Achaean Leaders
Thessalians in the Trojan War
People of the Trojan War
Suitors of Penelope
Aetolian characters in Greek mythology
Argive characters in Greek mythology
Theban characters in Greek mythology
Characters in Greek mythology
Ancient Magnesia
Arcadian mythology |
3771867 | https://en.wikipedia.org/wiki/Type%20enforcement | Type enforcement | The concept of type enforcement (TE), in the field of information technology, is an access control mechanism for regulating access in computer systems. Implementing TE gives priority to mandatory access control (MAC) over discretionary access control (DAC). Access clearance is first given to a subject (e.g. process) accessing objects (e.g. files, records, messages) based on rules defined in an attached security context. A security context in a domain is defined by a domain security policy. In the Linux security module (LSM) in SELinux, the security context is an extended attribute. Type enforcement implementation is a prerequisite for MAC, and a first step before multilevel security (MLS) or its replacement multi categories security (MCS). It is a complement of role-based access control (RBAC).
Control
Type enforcement implies fine-grained control over the operating system, not only over process execution, but also over domain transitions and the authorization scheme. This is why it is best implemented as a kernel module, as is the case with SELinux. Using type enforcement is a way to implement the FLASK architecture.
Access
Using type enforcement, users may (as in Microsoft Active Directory) or may not (as in SELinux) be associated with a Kerberos realm, although the original type enforcement model implies so. It is always necessary to define a TE access matrix containing rules about clearance granted to a given security context, or subject's rights over objects according to an authorization scheme.
Security
Practically, type enforcement evaluates a set of rules from the source security context of a subject, against a set of rules from the target security context of the object. A clearance decision occurs depending on the TE access description (matrix). Then, DAC or other access control mechanisms (MLS / MCS, ...) apply.
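The evaluation step can be pictured as a lookup in an access matrix keyed by the subject's domain and the object's type. The Java sketch below is purely conceptual: the class name, rule format and example labels are invented for illustration, and this is not how SELinux or any other TE implementation represents its policy internally.

import java.util.HashSet;
import java.util.Set;

public class TeMatrix {
    // One rule: subject domain -> object type -> permitted operation.
    private final Set<String> allowRules = new HashSet<>();

    // Record a rule such as allow("httpd_t", "httpd_log_t", "append").
    public void allow(String domain, String type, String operation) {
        allowRules.add(domain + "|" + type + "|" + operation);
    }

    // Clearance decision: only explicitly allowed triples pass;
    // DAC and other mechanisms (MLS / MCS, ...) would still apply afterwards.
    public boolean permits(String domain, String type, String operation) {
        return allowRules.contains(domain + "|" + type + "|" + operation);
    }

    public static void main(String[] args) {
        TeMatrix matrix = new TeMatrix();
        matrix.allow("httpd_t", "httpd_log_t", "append");
        System.out.println(matrix.permits("httpd_t", "httpd_log_t", "append")); // true
        System.out.println(matrix.permits("httpd_t", "shadow_t", "read"));      // false
    }
}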
History
Type enforcement was introduced in the Secure Ada Target architecture in the late 1980s with a full implementation developed in the Logical Coprocessing Kernel (LOCK) system. The Sidewinder Internet Firewall was implemented on a custom version of Unix that incorporated type enforcement.
A variant called domain type enforcement was developed in the Trusted MACH system.
The original type enforcement model stated that labels should be attached to subject and object: a “domain label” for a subject and a “type label” for an object. This implementation mechanism was improved by the FLASK architecture, substituting complex structures and implicit relationships. Also, the original TE access matrix was extended to other structures: lattice-based, history-based, environment-based, policy logic... This is a matter of implementation of TE by the various operating systems. In SELinux, the TE implementation does not internally distinguish TE-domains from TE-types. It should be considered a weakness of the original TE model that it specifies detailed implementation aspects such as labels and the matrix, especially using the terms “domain” and “type”, which have other, more generic, widely accepted meanings.
References
P. A. Loscocco, S. D. Smalley, P. A. Muckelbauer, R. C. Taylor, S. J. Turner, and J. F. Farrell. The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments. In Proceedings of the 21st National Information Systems Security Conference, pages 303–314, October 1998.
L. Badger, D. F. Sterne, D. L. Sherman, K. M. Walker and S. A. Haghighat, A Domain and Type Enforcement UNIX Prototype, In Proceedings of the 5th USENIX UNIX Security Symposium, June 1995.
W. E. Boebert and R. Y. Kain, A Practical Alternative to Hierarchical Integrity Policies, In Proceedings of the 8th National Computer Security Conference, page 18, 1985.
LOCK - A trusted computing system
Operating system security
Computer security models |
68847069 | https://en.wikipedia.org/wiki/Camille%20Stewart | Camille Stewart | Camille Stewart is an American technology and cybersecurity attorney, public speaker, and entrepreneur. She served as the Senior Policy advisor for the U.S. Department of Homeland Security under the Obama Administration from 2015 to 2017 under the Barack Obama Administration. She now serves as the Head of Security Policy & Election Integrity, Google Play & Android at Google.
Early life and education
Because her father was a computer scientist, she became interested in technology early on. She also had a strong passion for law as a child and knew she would become a lawyer; she began early by having her parents sign contracts when they made promises. She graduated from Miami University with a Bachelor of Science degree in business, and later attended American University Washington College of Law to earn her Juris Doctor degree. In 2020 she was selected to be a part of the Harvard Kennedy School Belfer Center for Science and International Affairs Cybersecurity Fellowship.
Career and professional life
While in law school, she studied intellectual property protection, theft, and abuse online. Upon graduating, she worked for Cyveillance, a cyber threat intelligence company. Stewart also spent time on Capitol Hill as a legal fellow for Representative Marcia Fudge and Representative Emanuel Cleaver II of the Congressional Black Caucus.
In 2015 she was appointed by the Obama administration as a senior policy advisor for the U.S. Department of Homeland Security. Her experience in that role empowered her to work towards advancing former president Barack Obama's cybersecurity vision, especially with respect to methods used by Chinese companies to acquire American assets without review by the Committee on Foreign Investment in the United States.
In 2021, Stewart was named the head of security policy for Google Play & Android. She co-founded #ShareTheMicInCyber, which aims to focus on the role of Black people in cybersecurity. A 2020 op-ed piece on CNN by Stewart and Michèle Flournoy was cited by the New York Times in an article on the need for a more diverse set of views in venues ranging from board rooms to national security. According to Politico, Stewart joined New America as a fellow in 2022. Stewart also works with the public to increase awareness about cybersecurity, the need for increased diversity in the field, and tools people can use to avoid computer scams.
Stewart is the founder of the legal consultancy and startup incubator, MarqueLaw, PLLC, and TheDigitalCounselor.com blog which develops and promotes forward-thinking solutions and leaders in cybersecurity. She currently serves on the board of directors for the International Foundation for Electoral Systems, GirlSecurity, and on the advisory board for Women of Color Advancing Peace, Security, & Conflict Transformation.
Honors and awards
In 2016, she received the Leadership Awards Rising Star award from Women in Technology. In 2019 she was named woman of the year in the 'barrier breaker' category of the Cyber Security Women awards, and was honored by New America and The Diversity in National Security Network for her contributions to national security and foreign policy. In 2021, The Root magazine named her one of the 100 most influential African Americans of 2021.
References
People associated with computer security
Google employees
Year of birth missing (living people)
Living people |
51910 | https://en.wikipedia.org/wiki/Quantum%20key%20distribution | Quantum key distribution | Quantum key distribution (QKD) is a secure communication method which implements a cryptographic protocol involving components of quantum mechanics. It enables two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages. It is often incorrectly called quantum cryptography, as it is the best-known example of a quantum cryptographic task.
An important and unique property of quantum key distribution is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a communication system can be implemented that detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e., the eavesdropper has no information about it), otherwise no secure key is possible and communication is aborted.
The security of encryption that uses quantum key distribution relies on the foundations of quantum mechanics, in contrast to traditional public key cryptography, which relies on the computational difficulty of certain mathematical functions, and cannot provide any mathematical proof as to the actual complexity of reversing the one-way functions used. QKD has provable security based on information theory, and forward secrecy.
The main drawback of quantum key distribution is that it usually relies on having an authenticated classical channel of communications. In modern cryptography, having an authenticated classical channel means that one has either already exchanged a symmetric key of sufficient length or public keys of sufficient security level. With such information already available, in practice one can achieve authenticated and sufficiently secure communications without using QKD, such as by using the Galois/Counter Mode of the Advanced Encryption Standard. Thus QKD does the work of a stream cipher at many times the cost. Noted security expert Bruce Schneier remarked that quantum key distribution is "as useless as it is expensive".
Quantum key distribution is only used to produce and distribute a key, not to transmit any message data. This key can then be used with any chosen encryption algorithm to encrypt (and decrypt) a message, which can then be transmitted over a standard communication channel. The algorithm most commonly associated with QKD is the one-time pad, as it is provably secure when used with a secret, random key. In real-world situations, it is often also used with encryption using symmetric key algorithms like the Advanced Encryption Standard algorithm.
Quantum key exchange
Quantum communication involves encoding information in quantum states, or qubits, as opposed to classical communication's use of bits. Usually, photons are used for these quantum states. Quantum key distribution exploits certain properties of these quantum states to ensure its security. There are several different approaches to quantum key distribution, but they can be divided into two main categories depending on which property they exploit.
Prepare and measure protocols: In contrast to classical physics, the act of measurement is an integral part of quantum mechanics. In general, measuring an unknown quantum state changes that state in some way. This is a consequence of quantum indeterminacy and can be exploited in order to detect any eavesdropping on communication (which necessarily involves measurement) and, more importantly, to calculate the amount of information that has been intercepted.
Entanglement based protocols: The quantum states of two (or more) separate objects can become linked together in such a way that they must be described by a combined quantum state, not as individual objects. This is known as entanglement and means that, for example, performing a measurement on one object affects the other. If an entangled pair of objects is shared between two parties, anyone intercepting either object alters the overall system, revealing the presence of the third party (and the amount of information they have gained).
These two approaches can each be further divided into three families of protocols: discrete variable, continuous variable and distributed phase reference coding. Discrete variable protocols were the first to be invented, and they remain the most widely implemented. The other two families are mainly concerned with overcoming practical limitations of experiments. The two protocols described below both use discrete variable coding.
BB84 protocol: Charles H. Bennett and Gilles Brassard (1984)
This protocol, known as BB84 after its inventors and year of publication, was originally described using photon polarization states to transmit the information. However, any two pairs of conjugate states can be used for the protocol, and many optical-fibre-based implementations described as BB84 use phase encoded states. The sender (traditionally referred to as Alice) and the receiver (Bob) are connected by a quantum communication channel which allows quantum states to be transmitted. In the case of photons this channel is generally either an optical fibre or simply free space. In addition they communicate via a public classical channel, for example using broadcast radio or the internet. The protocol is designed with the assumption that an eavesdropper (referred to as Eve) can interfere in any way with the quantum channel, while the classical channel needs to be authenticated.
The security of the protocol comes from encoding the information in non-orthogonal states. Quantum indeterminacy means that these states cannot in general be measured without disturbing the original state (see No-cloning theorem). BB84 uses two pairs of states, with each pair conjugate to the other pair, and the two states within a pair orthogonal to each other. Pairs of orthogonal states are referred to as a basis. The usual polarization state pairs used are either the rectilinear basis of vertical (0°) and horizontal (90°), the diagonal basis of 45° and 135° or the circular basis of left- and right-handedness. Any two of these bases are conjugate to each other, and so any two can be used in the protocol. Below the rectilinear and diagonal bases are used.
The first step in BB84 is quantum transmission. Alice creates a random bit (0 or 1) and then randomly selects one of her two bases (rectilinear or diagonal in this case) to transmit it in. She then prepares a photon polarization state depending both on the bit value and basis, as shown in the adjacent table. So for example a 0 is encoded in the rectilinear basis (+) as a vertical polarization state, and a 1 is encoded in the diagonal basis (x) as a 135° state. Alice then transmits a single photon in the state specified to Bob, using the quantum channel. This process is then repeated from the random bit stage, with Alice recording the state, basis and time of each photon sent.
According to quantum mechanics (particularly quantum indeterminacy), no possible measurement distinguishes between the 4 different polarization states, as they are not all orthogonal. The only possible measurement is between any two orthogonal states (an orthonormal basis). So, for example, measuring in the rectilinear basis gives a result of horizontal or vertical. If the photon was created as horizontal or vertical (as a rectilinear eigenstate) then this measures the correct state, but if it was created as 45° or 135° (diagonal eigenstates) then the rectilinear measurement instead returns either horizontal or vertical at random. Furthermore, after this measurement the photon is polarized in the state it was measured in (horizontal or vertical), with all information about its initial polarization lost.
As Bob does not know the basis the photons were encoded in, all he can do is to select a basis at random to measure in, either rectilinear or diagonal. He does this for each photon he receives, recording the time, measurement basis used and measurement result. After Bob has measured all the photons, he communicates with Alice over the public classical channel. Alice broadcasts the basis each photon was sent in, and Bob the basis each was measured in. They both discard photon measurements (bits) where Bob used a different basis, which is half on average, leaving half the bits as a shared key.
To check for the presence of an eavesdropper, Alice and Bob now compare a predetermined subset of their remaining bit strings. If a third party (usually referred to as Eve, for "eavesdropper") has gained any information about the photons' polarization, this introduces errors in Bob's measurements. Other environmental conditions can cause errors in a similar fashion. If more than p bits differ they abort the key and try again, possibly with a different quantum channel, as the security of the key cannot be guaranteed. The threshold p is chosen so that if the number of bits known to Eve is less than this, privacy amplification can be used to reduce Eve's knowledge of the key to an arbitrarily small amount at the cost of reducing the length of the key.
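The sifting step can be illustrated with a small classical simulation. The Java sketch below models an ideal, noise-free channel with no eavesdropper: Alice's bit is recovered whenever Bob happens to pick her basis, and is random otherwise, so on average half the positions survive sifting. It is a toy model only and does not capture real photon sources, detectors or Eve's strategies.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Bb84Sift {
    public static void main(String[] args) {
        Random rng = new Random();
        int n = 16;                       // number of photons sent
        int[] aliceBits = new int[n];
        int[] aliceBases = new int[n];    // 0 = rectilinear (+), 1 = diagonal (x)
        int[] bobBases = new int[n];
        int[] bobBits = new int[n];

        for (int i = 0; i < n; i++) {
            aliceBits[i] = rng.nextInt(2);
            aliceBases[i] = rng.nextInt(2);
            bobBases[i] = rng.nextInt(2);
            // Same basis: Bob reads Alice's bit exactly (ideal channel).
            // Different basis: the measurement outcome is random.
            bobBits[i] = (bobBases[i] == aliceBases[i]) ? aliceBits[i] : rng.nextInt(2);
        }

        // Sifting: over the public channel they compare bases (not bits)
        // and keep only the positions where the bases matched.
        List<Integer> siftedKey = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            if (aliceBases[i] == bobBases[i]) {
                siftedKey.add(bobBits[i]);
            }
        }
        System.out.println("Sifted key length: " + siftedKey.size() + " of " + n);
        System.out.println("Sifted key: " + siftedKey);
    }
}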
E91 protocol: Artur Ekert (1991)
Artur Ekert's scheme uses entangled pairs of photons. These can be created by Alice, by Bob, or by some source separate from both of them, including eavesdropper Eve. The photons are distributed so that Alice and Bob each end up with one photon from each pair.
The scheme relies on two properties of entanglement. First, the entangled states are perfectly correlated in the sense that if Alice and Bob both measure whether their particles have vertical or horizontal polarizations, they always get the same answer with 100% probability. The same is true if they both measure any other pair of complementary (orthogonal) polarizations. This necessitates that the two distant parties have exact directionality synchronization. However, the particular results are completely random; it is impossible for Alice to predict if she (and thus Bob) will get vertical polarization or horizontal polarization. Second, any attempt at eavesdropping by Eve destroys these correlations in a way that Alice and Bob can detect.
Similarly to BB84, the protocol involves a private measurement protocol before detecting the presence of Eve. In the measurement stage Alice measures each photon she receives in a basis chosen at random from a set of three analyser orientations, while Bob chooses from a set of three orientations, two of which coincide with Alice's. They keep their series of basis choices private until measurements are completed. Two groups of photons are made: the first consists of photons measured using the same basis by Alice and Bob while the second contains all other photons. To detect eavesdropping, they can compute the test statistic S using the correlation coefficients between Alice's bases and Bob's, similar to that shown in the Bell test experiments. Maximally entangled photons would result in |S| = 2√2. If this were not the case, then Alice and Bob can conclude Eve has introduced local realism to the system, violating Bell's Theorem. If the protocol is successful, the first group can be used to generate keys since those photons are completely anti-aligned between Alice and Bob.
Information reconciliation and privacy amplification
The quantum key distribution protocols described above provide Alice and Bob with nearly identical shared keys, and also with an estimate of the discrepancy between the keys. These differences can be caused by eavesdropping, but also by imperfections in the transmission line and detectors. As it is impossible to distinguish between these two types of errors, guaranteed security requires the assumption that all errors are due to eavesdropping. Provided the error rate between the keys is lower than a certain threshold (27.6% as of 2002), two steps can be performed to first remove the erroneous bits and then reduce Eve's knowledge of the key to an arbitrarily small value. These two steps are known as information reconciliation and privacy amplification respectively, and were first described in 1992.
Information reconciliation is a form of error correction carried out between Alice and Bob's keys, in order to ensure both keys are identical. It is conducted over the public channel and as such it is vital to minimise the information sent about each key, as this can be read by Eve. A common protocol used for information reconciliation is the cascade protocol, proposed in 1994. This operates in several rounds, with both keys divided into blocks in each round and the parity of those blocks compared. If a difference in parity is found then a binary search is performed to find and correct the error. If an error is found in a block from a previous round that had correct parity then another error must be contained in that block; this error is found and corrected as before. This process is repeated recursively, which is the source of the cascade name. After all blocks have been compared, Alice and Bob both reorder their keys in the same random way, and a new round begins. At the end of multiple rounds Alice and Bob have identical keys with high probability; however, Eve has additional information about the key from the parity information exchanged. From a coding theory point of view, though, information reconciliation is essentially source coding with side information; in consequence, any coding scheme that works for this problem can be used for information reconciliation. More recently, turbo codes, LDPC codes and polar codes have been used for this purpose, improving the efficiency of the cascade protocol.
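The binary parity search at the heart of cascade can be sketched as follows. This Java fragment locates a single flipped bit in one block by repeatedly halving it and comparing parities, as Alice and Bob would over the public channel; it is a simplified illustration and omits the multi-round shuffling and back-tracking of the full cascade protocol.

public class ParitySearch {
    // Parity (XOR) of bits[from..to).
    static int parity(int[] bits, int from, int to) {
        int p = 0;
        for (int i = from; i < to; i++) p ^= bits[i];
        return p;
    }

    // Assumes the block contains exactly one position where alice and bob differ.
    static int findError(int[] alice, int[] bob) {
        int lo = 0, hi = alice.length;
        while (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            // Compare parities of the left half; if they differ, the error is there.
            if (parity(alice, lo, mid) != parity(bob, lo, mid)) {
                hi = mid;
            } else {
                lo = mid;
            }
        }
        return lo; // index of the differing bit
    }

    public static void main(String[] args) {
        int[] alice = { 1, 0, 1, 1, 0, 0, 1, 0 };
        int[] bob   = { 1, 0, 1, 1, 0, 1, 1, 0 }; // bit 5 flipped in transmission
        int i = findError(alice, bob);
        System.out.println("Error at index " + i);  // prints: Error at index 5
        bob[i] ^= 1;                                // Bob corrects his bit
    }
}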
Privacy amplification is a method for reducing (and effectively eliminating) Eve's partial information about Alice and Bob's key. This partial information could have been gained both by eavesdropping on the quantum channel during key transmission (thus introducing detectable errors), and on the public channel during information reconciliation (where it is assumed Eve gains all possible parity information). Privacy amplification uses Alice and Bob's key to produce a new, shorter key, in such a way that Eve has only negligible information about the new key. This can be done using a universal hash function, chosen at random from a publicly known set of such functions, which takes as its input a binary string of length equal to the key and outputs a binary string of a chosen shorter length. The amount by which this new key is shortened is calculated, based on how much information Eve could have gained about the old key (which is known due to the errors this would introduce), in order to reduce the probability of Eve having any knowledge of the new key to a very low value.
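As an illustration, the sketch below uses a randomly chosen binary Toeplitz matrix, one commonly cited family of universal hash functions, to compress a reconciled key; the key length and the amount of shortening are arbitrary assumptions rather than values derived from any security analysis.

```python
import numpy as np

def toeplitz_hash(key_bits: np.ndarray, out_len: int, rng: np.random.Generator) -> np.ndarray:
    """Compress key_bits to out_len bits with a randomly chosen Toeplitz matrix,
    a standard family of universal hash functions over GF(2)."""
    n = len(key_bits)
    # A Toeplitz matrix is fully determined by its first column and first row,
    # i.e. out_len + n - 1 random bits, which can be chosen publicly at random.
    diagonals = rng.integers(0, 2, size=out_len + n - 1)
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = diagonals[i - j + n - 1]
    return (T @ key_bits) % 2  # matrix-vector product over GF(2)

rng = np.random.default_rng(42)
reconciled_key = rng.integers(0, 2, size=128)   # illustrative reconciled key
# Shorten by an amount reflecting Eve's estimated information (assumed here).
final_key = toeplitz_hash(reconciled_key, out_len=96, rng=rng)
print(len(final_key), "bit final key:", "".join(map(str, final_key[:16])), "...")
```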
Implementations
Experimental
In 2008, exchange of secure keys at 1 Mbit/s (over 20 km of optical fibre) and 10 kbit/s (over 100 km of fibre), was achieved by a collaboration between the University of Cambridge and Toshiba using the BB84 protocol with decoy state pulses.
In 2007, Los Alamos National Laboratory/NIST achieved quantum key distribution over 148.7 km of optical fibre using the BB84 protocol. Significantly, this distance is long enough for almost all the spans found in today's fibre networks. A European collaboration achieved free-space QKD over 144 km between two of the Canary Islands using entangled photons (the Ekert scheme) in 2006, and using BB84 enhanced with decoy states in 2007.
In 2015, the longest distance for optical fiber (307 km) was achieved by the University of Geneva and Corning Inc. In the same experiment, a secret key rate of 12.7 kbit/s was generated, making it the highest bit rate system over distances of 100 km. In 2016, a team from Corning and various institutions in China achieved a distance of 404 km, but at a bit rate too slow to be practical.
In June 2017, physicists led by Thomas Jennewein at the Institute for Quantum Computing and the University of Waterloo in Waterloo, Canada achieved the first demonstration of quantum key distribution from a ground transmitter to a moving aircraft. They reported optical links with distances between 3–10 km and generated secure keys up to 868 kilobytes in length.
Also in June 2017, as part of the Quantum Experiments at Space Scale project, Chinese physicists led by Pan Jianwei at the University of Science and Technology of China measured entangled photons over a distance of 1203 km between two ground stations, laying the groundwork for future intercontinental quantum key distribution experiments. Photons were sent from one ground station to the satellite they had named Micius and back down to another ground station, where they "observed a survival of two-photon entanglement and a violation of Bell inequality by 2.37 ± 0.09 under strict Einstein locality conditions" along a "summed length varying from 1600 to 2400 kilometers." Later that year BB84 was successfully implemented over satellite links from Micius to ground stations in China and Austria. The keys were combined and the result was used to transmit images and video between Beijing, China, and Vienna, Austria.
In May 2019 a group led by Hong Guo at Peking University and Beijing University of Posts and Telecommunications reported field tests of a continuous-variable QKD system through commercial fiber networks in Xi'an and Guangzhou over distances of 30.02 km (12.48 dB) and 49.85 km (11.62 dB) respectively.
In December 2020, the Indian Defence Research and Development Organisation tested QKD between two of its laboratories at its Hyderabad facility. The setup also demonstrated the detection of a third party trying to gain knowledge of the communication. Quantum-based security against eavesdropping was validated for the deployed system over a fibre-optic channel with 10 dB attenuation. A continuous-wave laser source was used to generate photons without depolarization effects, and the timing accuracy employed in the setup was of the order of picoseconds. A single-photon avalanche detector (SPAD) recorded the arrival of photons, and a key rate in the kbps range was achieved with a low quantum bit error rate.
In March 2021, the Indian Space Research Organisation also demonstrated free-space quantum communication over a distance of 300 meters. A free-space QKD was demonstrated at the Space Applications Centre (SAC), Ahmedabad, between two line-of-sight buildings within the campus, for video conferencing using quantum-key-encrypted signals. The experiment utilised a NAVIC receiver for time synchronization between the transmitter and receiver modules. Later, in January 2022, Indian scientists successfully created an atmospheric channel for the exchange of encrypted messages and images. After demonstrating quantum communication between two ground stations, India plans to develop Satellite Based Quantum Communication (SBQC).
Commercial
There are currently six companies offering commercial quantum key distribution systems around the world: ID Quantique (Geneva), MagiQ Technologies, Inc. (New York), QNu Labs (Bengaluru, India), QuintessenceLabs (Australia), QRate (Russia) and SeQureNet (Paris). Several other companies also have active research programs, including Toshiba, HP, IBM, Mitsubishi, NEC and NTT (see External links for direct research links).
In 2004, the world's first bank transfer using quantum key distribution was carried out in Vienna, Austria. Quantum encryption technology provided by the Swiss company Id Quantique was used in the Swiss canton (state) of Geneva to transmit ballot results to the capital in the national election occurring on 21 October 2007. In 2013, Battelle Memorial Institute installed a QKD system built by ID Quantique between their main campus in Columbus, Ohio and their manufacturing facility in nearby Dublin. Field tests of Tokyo QKD network have been underway for some time.
Quantum key distribution networks
DARPA
The DARPA Quantum Network was a 10-node quantum key distribution network which ran continuously for four years, 24 hours a day, from 2004 to 2007 in Massachusetts in the United States. It was developed by BBN Technologies, Harvard University and Boston University, with collaboration from IBM Research, the National Institute of Standards and Technology, and QinetiQ. It supported a standards-based Internet computer network protected by quantum key distribution.
SECOQC
The world's first computer network protected by quantum key distribution was implemented in October 2008, at a scientific conference in Vienna. The network, named SECOQC (Secure Communication Based on Quantum Cryptography), was funded by the EU. It used 200 km of standard fibre-optic cable to interconnect six locations across Vienna and the town of St Poelten, located 69 km to the west.
SwissQuantum
Id Quantique has successfully completed the longest running project for testing Quantum Key Distribution (QKD) in a field environment. The main goal of the SwissQuantum network project installed in the Geneva metropolitan area in March 2009, was to validate the reliability and robustness of QKD in continuous operation over a long time period in a field environment. The quantum layer operated for nearly 2 years until the project was shut down in January 2011 shortly after the initially planned duration of the test.
Chinese networks
In May 2009, a hierarchical quantum network was demonstrated in Wuhu, China. The hierarchical network consisted of a backbone network of four nodes connecting a number of subnets. The backbone nodes were connected through an optical switching quantum router. Nodes within each subnet were also connected through an optical switch and were connected to the backbone network through a trusted relay.
Launched in August 2016, the QUESS space mission created an international QKD channel between China and the Institute for Quantum Optics and Quantum Information in Vienna, Austria, a ground distance of about 7,500 km, enabling the first intercontinental secure quantum video call. By October 2017, a 2,000-km fiber line was operational between Beijing, Jinan, Hefei and Shanghai. Together they constitute the world's first space-ground quantum network. Up to 10 Micius/QUESS satellites are expected, allowing a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
Tokyo QKD Network
The Tokyo QKD Network was inaugurated on the first day of the UQCC2010 conference. The network involves an international collaboration between 7 partners: NEC, Mitsubishi Electric, NTT and NICT from Japan, and participation from Europe by Toshiba Research Europe Ltd. (UK), Id Quantique (Switzerland) and All Vienna (Austria). "All Vienna" is represented by researchers from the Austrian Institute of Technology (AIT), the Institute for Quantum Optics and Quantum Information (IQOQI) and the University of Vienna.
Los Alamos National Laboratory
A hub-and-spoke network has been operated by Los Alamos National Laboratory since 2011. All messages are routed via the hub. The system equips each node in the network with quantum transmitters—i.e., lasers—but not with expensive and bulky photon detectors. Only the hub receives quantum messages. To communicate, each node sends a one-time pad to the hub, which it then uses to communicate securely over a classical link. The hub can route this message to another node using another one time pad from the second node. The entire network is secure only if the central hub is secure. Individual nodes require little more than a laser: Prototype nodes are around the size of a box of matches.
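The following toy sketch illustrates the relay idea with exclusive-OR one-time-pad encryption. In the real network the pads would be established over the quantum links rather than generated locally, and the example also makes explicit why the whole scheme rests on the hub being trusted: the hub necessarily sees the plaintext.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One-time pads that, in the real network, would be established with the hub
# over the quantum links; here they are simply generated locally for the demo.
message = b"meter reading 42"
pad_alice = secrets.token_bytes(len(message))   # shared between node A and the hub
pad_bob = secrets.token_bytes(len(message))     # shared between node B and the hub

ciphertext_to_hub = xor(message, pad_alice)            # node A -> hub, classical link
recovered_at_hub = xor(ciphertext_to_hub, pad_alice)   # hub decrypts (sees plaintext)
ciphertext_to_bob = xor(recovered_at_hub, pad_bob)     # hub re-encrypts for node B
recovered_at_bob = xor(ciphertext_to_bob, pad_bob)     # node B decrypts

print(recovered_at_bob == message)  # True
```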
Attacks and security proofs
Intercept and resend
The simplest type of possible attack is the intercept-resend attack, where Eve measures the quantum states (photons) sent by Alice and then sends replacement states to Bob, prepared in the state she measures. In the BB84 protocol, this produces errors in the key Alice and Bob share. As Eve has no knowledge of the basis a state sent by Alice is encoded in, she can only guess which basis to measure in, in the same way as Bob. If she chooses correctly, she measures the correct photon polarization state as sent by Alice, and resends the correct state to Bob. However, if she chooses incorrectly, the state she measures is random, and the state sent to Bob cannot be the same as the state sent by Alice. If Bob then measures this state in the same basis Alice sent, he too gets a random result, as Eve has sent him a state in the opposite basis, with a 50% chance of an erroneous result (instead of the correct result he would get without the presence of Eve).
The probability Eve chooses the incorrect basis is 50% (assuming Alice chooses randomly), and if Bob measures this intercepted photon in the basis Alice sent he gets a random result, i.e., an incorrect result with probability of 50%. The probability an intercepted photon generates an error in the key string is then 50% × 50% = 25%. If Alice and Bob publicly compare n of their key bits (thus discarding them as key bits, as they are no longer secret), the probability they find disagreement and identify the presence of Eve is P_d = 1 − (3/4)^n.
So to detect an eavesdropper with probability P_d = 0.999999999, Alice and Bob need to compare n = 72 key bits.
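The sketch below reproduces these figures: a Monte Carlo estimate of the roughly 25% error rate introduced by a full intercept-resend attack, and the detection probability after comparing n bits. The trial count and random seed are arbitrary choices for the example.

```python
import random

def intercept_resend_error_rate(trials: int = 100_000) -> float:
    """Monte Carlo estimate of the error rate Eve's intercept-resend attack
    introduces into the sifted key (expected value: 25%)."""
    random.seed(0)
    errors = sifted = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        alice_basis = random.randint(0, 1)   # 0 = rectilinear, 1 = diagonal
        eve_basis = random.randint(0, 1)
        # If Eve guesses the wrong basis, the bit she resends is random.
        eve_bit = bit if eve_basis == alice_basis else random.randint(0, 1)
        bob_basis = random.randint(0, 1)
        if bob_basis != alice_basis:
            continue                          # discarded during sifting
        # Bob measures Eve's resent photon; a wrong-basis photon gives a random result.
        bob_bit = eve_bit if bob_basis == eve_basis else random.randint(0, 1)
        sifted += 1
        errors += (bob_bit != bit)
    return errors / sifted

def detection_probability(n: int) -> float:
    """Probability of spotting Eve when Alice and Bob compare n sifted key bits."""
    return 1 - 0.75 ** n

print(f"simulated QBER under attack: {intercept_resend_error_rate():.3f}")
print(f"detection probability with 72 compared bits: {detection_probability(72):.9f}")
```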
Man-in-the-middle attack
Quantum key distribution is vulnerable to a man-in-the-middle attack when used without authentication to the same extent as any classical protocol, since no known principle of quantum mechanics can distinguish friend from foe. As in the classical case, Alice and Bob cannot authenticate each other and establish a secure connection without some means of verifying each other's identities (such as an initial shared secret). If Alice and Bob have an initial shared secret, then they can use an unconditionally secure authentication scheme (such as Carter–Wegman) along with quantum key distribution to exponentially expand this key, using a small amount of the new key to authenticate the next session. Several methods to create this initial shared secret have been proposed, for example using a third party or chaos theory. Nevertheless, only an "almost strongly universal" family of hash functions can be used for unconditionally secure authentication.
Photon number splitting attack
In the BB84 protocol Alice sends quantum states to Bob using single photons. In practice many implementations use laser pulses attenuated to a very low level to send the quantum states. These laser pulses contain a very small number of photons, for example 0.2 photons per pulse, which are distributed according to a Poisson distribution. This means most pulses actually contain no photons (no pulse is sent), some pulses contain 1 photon (which is desired) and a few pulses contain 2 or more photons. If the pulse contains more than one photon, then Eve can split off the extra photons and transmit the remaining single photon to Bob. This is the basis of the photon number splitting attack, where Eve stores these extra photons in a quantum memory until Bob detects the remaining single photon and Alice reveals the encoding basis. Eve can then measure her photons in the correct basis and obtain information on the key without introducing detectable errors.
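As a rough illustration of these proportions, the following sketch evaluates the Poisson photon-number statistics for a mean of 0.2 photons per pulse, the figure quoted above; the resulting numbers are properties of the Poisson distribution, not of any particular experimental system.

```python
import math

def poisson_pmf(k: int, mu: float) -> float:
    """Probability of a pulse containing exactly k photons for mean photon number mu."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

mu = 0.2  # mean photon number per attenuated laser pulse (value from the text)
p0 = poisson_pmf(0, mu)
p1 = poisson_pmf(1, mu)
p_multi = 1 - p0 - p1   # pulses carrying 2 or more photons

print(f"empty pulses:         {p0:.3f}")      # about 0.819
print(f"single-photon pulses: {p1:.3f}")      # about 0.164
print(f"multi-photon pulses:  {p_multi:.4f}  (vulnerable to photon number splitting)")
# Conditioned on a pulse being non-empty, the multi-photon fraction is what
# Eve can exploit without introducing errors.
print(f"multi-photon fraction of non-empty pulses: {p_multi / (1 - p0):.3f}")
```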
Even with the possibility of a PNS attack a secure key can still be generated, as shown in the GLLP security proof; however, a much higher amount of privacy amplification is needed, reducing the secure key rate significantly (with PNS the rate scales as t², compared to t for single-photon sources, where t is the transmittance of the quantum channel).
There are several solutions to this problem. The most obvious is to use a true single-photon source instead of an attenuated laser. While such sources are still at a developmental stage, QKD has been carried out successfully with them. However, as current sources operate at low efficiency and frequency, key rates and transmission distances are limited. Another solution is to modify the BB84 protocol, as is done for example in the SARG04 protocol, in which the secure key rate scales as t^(3/2). The most promising solution is the decoy state method, in which Alice randomly sends some of her laser pulses with a lower average photon number. These decoy states can be used to detect a PNS attack, as Eve has no way to tell which pulses are signal and which decoy. Using this idea the secure key rate scales as t, the same as for a single-photon source. This idea was first implemented successfully at the University of Toronto, and in several follow-up QKD experiments, allowing for high key rates secure against all known attacks.
Denial of service
Because currently a dedicated fibre optic line (or line of sight in free space) is required between the two points linked by quantum key distribution, a denial of service attack can be mounted by simply cutting or blocking the line. This is one of the motivations for the development of quantum key distribution networks, which would route communication via alternate links in case of disruption.
Trojan-horse attacks
A quantum key distribution system may be probed by Eve by sending in bright light from the quantum channel and analyzing the back-reflections in a Trojan-horse attack. One research study has shown that Eve can discern Bob's secret basis choice with higher than 90% probability, breaching the security of the system.
Security proofs
If Eve is assumed to have unlimited resources, for example both classical and quantum computing power, there are many more attacks possible. BB84 has been proven secure against any attacks allowed by quantum mechanics, both for sending information using an ideal photon source which only ever emits a single photon at a time, and also using practical photon sources which sometimes emit multiphoton pulses. These proofs are unconditionally secure in the sense that no conditions are imposed on the resources available to the eavesdropper; however, there are other conditions required:
Eve cannot physically access Alice and Bob's encoding and decoding devices.
The random number generators used by Alice and Bob must be trusted and truly random (for example a Quantum random number generator).
The classical communication channel must be authenticated using an unconditionally secure authentication scheme.
The message must be encrypted using a one-time-pad-like scheme.
Quantum hacking
Hacking attacks target vulnerabilities in the operation of a QKD protocol or deficiencies in the components of the physical devices used in construction of the QKD system. If the equipment used in quantum key distribution can be tampered with, it could be made to generate keys that were not secure using a random number generator attack. Another common class of attacks is the Trojan horse attack which does not require physical access to the endpoints: rather than attempt to read Alice and Bob's single photons, Eve sends a large pulse of light back to Alice in between transmitted photons. Alice's equipment reflects some of Eve's light, revealing the state of Alice's basis (e.g., a polarizer). This attack can be detected, e.g. by using a classical detector to check the non-legitimate signals (i.e. light from Eve) entering Alice's system. It is also conjectured that most hacking attacks can similarly be defeated by modifying the implementation, though there is no formal proof.
Several other attacks including faked-state attacks, phase remapping attacks, and time-shift attacks are now known. The time-shift attack has even been demonstrated on a commercial quantum cryptosystem. This is the first demonstration of quantum hacking against a non-homemade quantum key distribution system. Later on, the phase-remapping attack was also demonstrated on a specially configured, research oriented open QKD system (made and provided by the Swiss company Id Quantique under their Quantum Hacking program). It is one of the first 'intercept-and-resend' attacks on top of a widely used QKD implementation in commercial QKD systems. This work has been widely reported in media.
The first attack that claimed to be able to eavesdrop the whole key without leaving any trace was demonstrated in 2010. It was experimentally shown that the single-photon detectors in two commercial devices could be fully remote-controlled using specially tailored bright illumination. In a spree of publications thereafter, the collaboration between the Norwegian University of Science and Technology in Norway and Max Planck Institute for the Science of Light in Germany, has now demonstrated several methods to successfully eavesdrop on commercial QKD systems based on weaknesses of Avalanche photodiodes (APDs) operating in gated mode. This has sparked research on new approaches to securing communications networks.
Counterfactual quantum key distribution
The task of distributing a secret key could be achieved even when the particle (on which the secret information, e.g. polarization, has been encoded) does not traverse the quantum channel, using a protocol developed by Tae-Gon Noh. Here Alice generates a photon which, by not taking a measurement until later, exists in a superposition of being in paths (a) and (b) simultaneously. Path (a) stays inside Alice's secure device and path (b) goes to Bob. By rejecting the photons that Bob receives and only accepting the ones he does not receive, Bob and Alice can set up a secure channel, i.e. Eve's attempts to read the counterfactual photons would still be detected. This protocol uses the quantum phenomenon whereby the possibility that a photon can be sent has an effect even when it is not sent. So-called interaction-free measurement also uses this quantum effect, as for example in the bomb testing problem, in which a bomb can be determined not to be a dud without setting it off, except in a counterfactual sense.
History
Quantum cryptography was first proposed by Stephen Wiesner, then at Columbia University in New York, who, in the early 1970s, introduced the concept of quantum conjugate coding. His seminal paper titled "Conjugate Coding" was rejected by IEEE Information Theory but was eventually published in 1983 in SIGACT News (15:1 pp. 78–88, 1983). In this paper he showed how to store or transmit two messages by encoding them in two "conjugate observables", such as linear and circular polarization of light, so that either, but not both, may be received and decoded. He illustrated his idea with a design of unforgeable bank notes. A decade later, building upon this work, Charles H. Bennett, of the IBM Thomas J. Watson Research Center, and Gilles Brassard, of the University of Montreal, proposed a method for secure communication based on Wiesner's "conjugate observables". In 1990, Artur Ekert, then a PhD student at Wolfson College, University of Oxford, developed a different approach to quantum key distribution based on quantum entanglement.
Future
The current commercial systems are aimed mainly at governments and corporations with high security requirements. Key distribution by courier is typically used in such cases, where traditional key distribution schemes are not believed to offer enough guarantee. This has the advantage of not being intrinsically distance limited, and despite long travel times the transfer rate can be high due to the availability of large capacity portable storage devices. The major difference of quantum key distribution is the ability to detect any interception of the key, whereas with courier the key security cannot be proven or tested. QKD (Quantum Key Distribution) systems also have the advantage of being automatic, with greater reliability and lower operating costs than a secure human courier network.
Kak's three-stage protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms.
Factors preventing wide adoption of quantum key distribution outside high security areas include the cost of equipment, and the lack of a demonstrated threat to existing key exchange protocols. However, with optic fibre networks already present in many countries the infrastructure is in place for a more widespread use.
An Industry Specification Group (ISG) of the European Telecommunications Standards Institute (ETSI) has been set up to address standardisation issues in quantum cryptography.
European Metrology Institutes, in the context of dedicated projects, are developing measurements required to characterise components of QKD systems.
Toshiba Europe has been awarded an Institute of Physics Award for Business Innovation. The award recognises Toshiba's QKD technology, developed over two decades of research, which protects communication infrastructure from present and future cyber-threats and has been commercialised in UK-manufactured products. The Institute of Physics (IOP) is the professional body and learned society for physics in the UK and Ireland.
Toshiba also took the Semi Grand Prix award in the Solutions Category for QKD, winning the Minister of Economy, Trade and Industry Award at the CEATEC AWARD 2021, presented at CEATEC, Japan's premier electronics industry trade show.
See also
List of quantum key distribution protocols
Quantum computing
Quantum cryptography
Quantum information science
Quantum network
References
External links
General and review
Quantum Computing 101
Scientific American Magazine (January 2005 Issue) Best-Kept Secrets Non-technical article on quantum cryptography
Physics World Magazine (March 2007 Issue) Non-technical article on current state and future of quantum communication
SECOQC White Paper on Quantum Key Distribution and Cryptography European project to create a large scale quantum cryptography network, includes discussion of current QKD approaches and comparison with classical cryptography
The future of cryptography May 2003 Tomasz Grabowski
ARDA Quantum Cryptography Roadmap
Lectures at the Institut Henri Poincaré (slides and videos)
Interactive quantum cryptography demonstration experiment with single photons for education
More specific information
Description of entanglement based quantum cryptography from Artur Ekert.
Description of BB84 protocol and privacy amplification by Sharon Goldwater.
Public debate on the Security of Quantum Key Distribution at the conference Hot Topics in Physical Informatics, 11 November 2013
Further information
Quantiki.org - Quantum Information portal and wiki
Interactive BB84 simulation
Quantum key distribution simulation
Online Simulation and Analysis Toolkit for Quantum Key Distribution
Quantum cryptography research groups
Experimental Quantum Cryptography with Entangled Photons
NIST Quantum Information Networks
Free Space Quantum Cryptography
Experimental Continuous Variable QKD, MPL Erlangen
Experimental Quantum Hacking, MPL Erlangen
Quantum cryptography lab. Pljonkin A.P.
Companies selling quantum devices for cryptography
AUREA Technology sells the optical building blocks for Quantum cryptography
id Quantique sells Quantum Key Distribution products
MagiQ Technologies sells quantum devices for cryptography
QuintessenceLabs Solutions based on continuous wave lasers
SeQureNet sells Quantum Key Distribution products using continuous-variables
Companies with quantum cryptography research programmes
Toshiba
Hewlett Packard
IBM
Mitsubishi
NEC
NTT
Cryptography
Quantum information science
Quantum cryptography |
2567411 | https://en.wikipedia.org/wiki/Traversal%20Using%20Relays%20around%20NAT | Traversal Using Relays around NAT | Traversal Using Relays around NAT (TURN) is a protocol that assists in traversal of network address translators (NAT) or firewalls for multimedia applications. It may be used with the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). It is most useful for clients on networks masqueraded by symmetric NAT devices. TURN does not aid in running servers on well known ports in the private network through a NAT; it supports the connection of a user behind a NAT to only a single peer, as in telephony, for example.
TURN is specified by . The TURN URI scheme is documented in .
Introduction
NATs, while providing benefits, also come with drawbacks. The most troublesome of those drawbacks is the fact that they break many existing IP applications, and make it difficult to deploy new ones. Guidelines have been developed that describe how to build "NAT friendly" protocols, but many protocols simply cannot be constructed according to those guidelines. Examples of such protocols include multimedia applications and file sharing.
Session Traversal Utilities for NAT (STUN) provides one way for an application to traverse a NAT. STUN allows a client to obtain a transport address (an IP address and port) which may be useful for receiving packets from a peer. However, addresses obtained by STUN may not be usable by all peers. Those addresses work depending on the topological conditions of the network. Therefore, STUN by itself cannot provide a complete solution for NAT traversal.
A complete solution requires a means by which a client can obtain a transport address from which it can receive media from any peer which can send packets to the public Internet. This can only be accomplished by relaying data through a server that resides on the public Internet. Traversal Using Relay NAT (TURN) is a protocol that allows a client to obtain IP addresses and ports from such a relay.
Although TURN almost always provides connectivity to a client, it is resource intensive for the provider of the TURN server. It is therefore desirable to use TURN as a last resort only, preferring other mechanisms (such as STUN or direct connectivity) when possible. To accomplish that, the Interactive Connectivity Establishment (ICE) methodology can be used to discover the optimal means of connectivity.
Protocol
The process begins when a client computer wants to contact a peer computer for a data transaction, but cannot do so due to both client and peer being behind respective NATs. If STUN is not an option because one of the NATs is a symmetric NAT (a type of NAT known to be non-STUN compatible), TURN must be used.
First, the client contacts a TURN server with an "Allocate" request. The Allocate request asks the TURN server to allocate some of its resources for the client so that it may contact a peer. If allocation is possible, the server allocates an address for the client to use as a relay, and sends the client an "Allocation Successful" response, which contains an "allocated relayed transport address" located at the TURN server.
Second, the client sends a CreatePermission request to the TURN server to create a permissions-checking system for peer–server communications. In other words, when a peer is finally contacted and sends information back to the TURN server to be relayed to the client, the TURN server uses the permissions to verify that the peer-to-TURN-server communication is valid.
After permissions have been created, the client has two choices for sending the actual data: (1) it can use the Send mechanism, or (2) it can reserve a channel using the ChannelBind request. The Send mechanism is more straightforward, but carries a larger header, 36 bytes, which can substantially increase the bandwidth overhead of a TURN-relayed conversation. By contrast, the ChannelBind method is lighter: the header is only 4 bytes, but it requires a channel to be reserved, which needs to be periodically refreshed, among other considerations.
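As a rough back-of-the-envelope illustration of this trade-off, the following sketch compares the relative per-packet overhead of the two mechanisms using the 36-byte and 4-byte header sizes quoted above; the payload sizes are arbitrary assumptions, and any lower-layer framing is ignored.

```python
# Per-packet overhead comparison for relaying payloads through a TURN server,
# using the header sizes quoted above (36 bytes for Send/Data indication
# framing vs. 4 bytes for ChannelData). Payload sizes are illustrative only.
SEND_HEADER = 36
CHANNELDATA_HEADER = 4

for payload in (40, 160, 1200):   # e.g. small VoIP frames up to large video packets
    send_total = payload + SEND_HEADER
    chan_total = payload + CHANNELDATA_HEADER
    print(f"payload {payload:4d} B: "
          f"Send {send_total:4d} B (+{SEND_HEADER / payload:5.1%}), "
          f"ChannelData {chan_total:4d} B (+{CHANNELDATA_HEADER / payload:5.1%})")
```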
Using either method, Send or channel binding, the TURN server receives the data from the client and relays it to the peer using UDP datagrams, which contain as their Source Address the "Allocated Relayed Transport Address". The peer receives the data and responds, again using a UDP datagram as the transport protocol, sending the UDP datagram to the relay address at the TURN server.
The TURN server receives the peer UDP datagram, checks the permissions and if they are valid, forwards it to the client.
This process gets around even symmetric NATs because both the client and peer can at least talk to the TURN server, which has allocated a relay IP address for communication.
While TURN is more robust than STUN in that it assists in traversal of more types of NATs, a TURN communication relays the entire communication through the server, requiring far more server bandwidth than the STUN protocol, which typically only resolves the public-facing IP address and relays the information to the client and peer for them to use in direct communication. For this reason, the ICE protocol mandates STUN usage as a first resort, and TURN usage only when dealing with symmetric NATs or other situations where STUN cannot be used.
See also
Interactive Connectivity Establishment
External links
Internet protocols
Network protocols
Network address translation
Voice over IP
Application layer protocols |
498464 | https://en.wikipedia.org/wiki/Traditional%20animation | Traditional animation | Traditional animation (or classical animation, cel animation, hand-drawn animation, 2D animation or just 2D) is an animation technique in which each frame is drawn by hand. The technique was the dominant form of animation in cinema until the advent of computer animation.
Process
Writing and storyboarding
Animation production usually begins after a story is conceived. The oral or literary source material must then be converted into an animation film script, from which the storyboard is derived. The storyboard has an appearance somewhat similar to comic book panels, and is a shot by shot breakdown of the staging, acting and any camera moves that will be present in the film. The images allow the animation team to plan the flow of the plot and the composition of the imagery. The storyboard artists will have regular meetings with the director and may have to redraw or "re-board" a sequence many times before it meets final approval.
Voice recording
Before true animation begins, a preliminary soundtrack or scratch track is recorded, so that the animation may be more precisely synchronized to the soundtrack. Given the slow, methodical manner in which traditional animation is produced, it is almost always easier to synchronize animation to a pre-existing soundtrack than it is to synchronize a soundtrack to pre-existing animation. A completed cartoon soundtrack will feature music, sound effects, and dialogue performed by voice actors. However, the scratch track used during animation typically contains only the voices, any vocal songs to which characters must sing-along, and temporary musical score tracks; the final score and sound effects are added during post-production.
In the case of Japanese anime, as well as most pre-1930 sound animated cartoons, the sound was post-synched; that is, the soundtrack was recorded after the film elements were finished by watching the film and performing the dialogue, music, and sound effects required. Some studios, most notably Fleischer Studios, continued to post-synch their cartoons through most of the 1930s, which allowed for the presence of the "muttered ad-libs" present in many Popeye the Sailor and Betty Boop cartoons.
Animatic
Usually, an animatic or story reel is created after the soundtrack is recorded, but before full animation begins. An animatic typically consists of pictures of the storyboard timed and cut together with the soundtrack. This allows the animators and directors to work out any script and timing issues that may exist with the current storyboard. The storyboard and soundtrack are amended if necessary, and a new animatic may be created and reviewed with the director until the storyboard is perfected. Editing the film at the animatic stage prevents the animation of scenes that would be edited out of the film; as traditional animation is a very expensive and time-consuming process, creating scenes that will eventually be edited out of the completed cartoon is strictly avoided.
Design and timing
The storyboards are then sent to the design departments. Character designers prepare model sheets for any characters and props that appear in the film; and these are used to help standardize appearance, poses, and gestures. The model sheets will often include "turnarounds" which show how a character or object looks in three-dimensions along with standardized special poses and expressions so that the artists working on the project can have a guide to refer to in order to deliver consistent work. Sometimes, small statues known as maquettes may be produced, so that an animator can see what a character looks like in three dimensions. Around the same time, the background stylists will do similar work for any settings and locations present in the storyboard, and the art directors and color stylists will determine the art style and color schemes to be used.
While the design is going on, the timing director (who in many cases will be the main director) takes the animatic and analyzes exactly what poses, drawings, and lip movements will be needed on what frames. An exposure sheet (or X-sheet for short) is created; this is a printed table that breaks down the action, dialogue, and sound frame-by-frame as a guide for the animators. If a film is based more strongly in music, a bar sheet may be prepared in addition to or instead of an X-sheet. Bar sheets show the relationship between the on-screen action, the dialogue, and the actual musical notation used in the score.
Layout
Layout begins after the designs are completed and approved by the director. The layout process is the same as the blocking out of shots by a cinematographer on a live-action film. It is here that the background layout artists determine the camera angles, camera paths, lighting, and shading of the scene. Character layout artists will determine the major poses for the characters in the scene and will make a drawing to indicate each pose. For short films, character layouts are often the responsibility of the director.
The layout drawings and storyboards are then spliced together with the audio, and an animatic is formed (not to be confused with its predecessor, the leica reel). The term "animatic" was originally coined by Walt Disney Animation Studios.
Animation
Once the animatic is finally approved by the director, animation begins.
In the traditional animation process, animators will begin by drawing sequences of animation on sheets of transparent paper perforated to fit the peg bars in their desks, often using colored pencils, one picture or "frame" at a time. A peg bar is an animation tool used in traditional animation to keep the drawings in place. The pins in the peg bar match the holes in the paper. It is attached to the animation desk or light table, depending on which is being used. A key animator or lead animator will draw the key drawings or key frames in a scene, using the character layouts as a guide. The key animator draws enough of the frames to get across the major poses within a character performance; in a sequence of a character jumping across a gap, the key animator may draw a frame of the character as they are about to leap, two or more frames as the character is flying through the air and the frame for the character landing on the other side of the gap.
Timing is important for the animators drawing these frames; each frame must match exactly what is going on in the soundtrack at the moment the frame will appear, or else the discrepancy between sound and visual will be distracting to the audience. For example, in high-budget productions, extensive effort is given in making sure a speaking character's mouth matches in shape the sound that the character's actor is producing as they speak.
While working on a scene, a key animator will usually prepare a pencil test of the scene. A pencil test is a much rougher version of the final animated scene (often devoid of many character details and color); the pencil drawings are quickly photographed or scanned and synced with the necessary soundtracks. This allows the animation to be reviewed and improved upon before passing the work on to their assistant animators, who will add details and some of the missing frames in the scene. The work of the assistant animators is reviewed, pencil-tested, and corrected until the lead animator is ready to meet with the director and have their scene sweatboxed, or reviewed by the director, producer, and other key creative team members. Similar to the storyboarding stage, an animator may be required to redo a scene many times before the director will approve it.
In high-budget animated productions, often each major character will have an animator or group of animators solely dedicated to drawing that character. The group will be made up of one supervising animator, a small group of key animators, and a larger group of assistant animators. For scenes where two characters interact, the key animators for both characters will decide which character is "leading" the scene, and that character will be drawn first. The second character will be animated to react to and support the actions of the "leading" character.
Once the key animation is approved, the lead animator forwards the scene on to the clean-up department, made up of the clean-up animators and the inbetweeners. The clean-up animators take the lead and assistant animators' drawings and trace them onto a new sheet of paper, making sure to include all of the details present on the original model sheets, so that the film maintains a cohesiveness and consistency in art style. The inbetweeners will draw in whatever frames are still missing in-between the other animators' drawings. This procedure is called tweening. The resulting drawings are again pencil-tested and sweatboxed until they meet approval.
At each stage during pencil animation, approved artwork is spliced into the Leica reel.
This process is the same for both character animation and special effects animation, which on most high-budget productions are done in separate departments. Effects animators animate anything that moves and are not a character, including props, vehicles, machinery and phenomena such as fire, rain, and explosions. Sometimes, instead of drawings, a number of special processes are used to produce special effects in animated films; rain, for example, has been created in Disney animated films since the late 1930s by filming slow-motion footage of water in front of a black background, with the resulting film superimposed over the animation.
Pencil test
After all the drawings are cleaned up, they are then photographed on an animation camera, usually on black and white film stock. Nowadays, pencil tests can be made using a video camera and computer software.
Backgrounds
While the animation is being done, the background artists will paint the sets over which the action of each animated sequence will take place. These backgrounds are generally done in gouache or acrylic paint, although some animated productions have used backgrounds done in watercolor or oil paint. Background artists follow very closely the work of the background layout artists and color stylists (which is usually compiled into a workbook for their use) so that the resulting backgrounds are harmonious in tone with the character designs.
Traditional ink-and-paint and camera
Once the clean-ups and in-between drawings for a sequence are completed, they are prepared for photography, a process known as ink-and-paint. Each drawing is then transferred from paper to a thin, clear sheet of plastic called a cel, a contraction of the material name celluloid (the original flammable cellulose nitrate was later replaced with the more stable cellulose acetate). The outline of the drawing is inked or photocopied onto the cel, and gouache, acrylic or a similar type of paint is used on the reverse sides of the cels to add colors in the appropriate shades. In many cases, characters will have more than one color palette assigned to them; the usage of each one depends upon the mood and lighting of each scene. The transparent quality of the cel allows for each character or object in a frame to be animated on different cels, as the cel of one character can be seen underneath the cel of another; and the opaque background will be seen beneath all of the cels.
When an entire sequence has been transferred to cels, the photography process begins. Each cel involved in a frame of a sequence is laid on top of each other, with the background at the bottom of the stack. A piece of glass is lowered onto the artwork in order to flatten any irregularities, and the composite image is then photographed by a special animation camera, also called rostrum camera. The cels are removed, and the process repeats for the next frame until each frame in the sequence has been photographed. Each cel has registration holes, small holes along the top or bottom edge of the cel, which allow the cel to be placed on corresponding peg bars before the camera to ensure that each cel aligns with the one before it; if the cels are not aligned in such a manner, the animation, when played at full speed, will appear "jittery." Sometimes, frames may need to be photographed more than once, in order to implement superimpositions and other camera effects. Pans are created by either moving the cels or backgrounds 1 step at a time over a succession of frames (the camera does not pan; it only zooms in and out).
Dope sheets are created by the animators and used by the camera operator to transfer each animation drawing into the number of film frames specified by the animators, whether one frame ("on ones"), two ("on twos") or three ("on threes").
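As a simple illustration of how an exposure sheet maps drawings to camera frames, the sketch below expands hypothetical X-sheet entries shot on ones, twos and threes into a per-frame list; the drawing labels and exposure counts are invented for the example.

```python
def expose(dope_sheet):
    """Expand (drawing label, exposure) pairs from an X-sheet into the
    per-frame sequence the rostrum camera would actually shoot."""
    frames = []
    for drawing, exposure in dope_sheet:
        frames.extend([drawing] * exposure)
    return frames

# Illustrative entries: drawing A held on twos, B on ones for a fast action,
# C held on threes.
dope_sheet = [("A1", 2), ("A2", 2), ("B1", 1), ("B2", 1), ("B3", 1), ("C1", 3)]
frames = expose(dope_sheet)

print(frames)  # one entry per film frame
print(f"{len(frames)} frames = {len(frames) / 24:.2f} s at 24 fps "
      f"using {len(dope_sheet)} drawings")
```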
As the scenes come out of final photography, they are spliced into the Leica reel, taking the place of the pencil animation. Once every sequence in the production has been photographed, the final film is sent for development and processing, while the final music and sound effects are added to the soundtrack. Again, editing in the traditional live-action sense is generally not done in animation, but if it is required it is done at this time, before the final print of the film is ready for duplication or broadcast.
Among the most common types of animation rostrum cameras was the Oxberry. Such cameras were always made of black anodized aluminum, and commonly had 2 peg bars, 1 at the top and 1 at the bottom of the lightbox. The Oxberry Master Series had 4 peg bars, 2 above and 2 below, and sometimes used a "floating peg bar" as well. The height of the column on which the camera was mounted determined the amount of zoom achievable on a piece of artwork. Such cameras were massive mechanical affairs that might weigh close to a ton and take hours to break down or set up.
In the later years of the animation rostrum camera, stepper motors controlled by computers were attached to the various axes of movement of the camera, thus saving many hours of hand cranking by human operators. Gradually, motion control techniques were adopted throughout the industry.
Digital ink and paint processes gradually made these traditional animation techniques and equipment obsolete.
Digital ink and paint
The current process, termed "digital ink and paint", is the same as traditional ink and paint until after the animation drawings are completed; instead of being transferred to cels, the animators' drawings are either scanned into a computer or drawn directly onto a computer monitor via graphics tablets (such as a Wacom Cintiq tablet), where they are colored and processed using one or more of a variety of software packages. The resulting drawings are composited in the computer over their respective backgrounds, which have also been scanned into the computer (if not digitally painted), and the computer outputs the final film by either exporting a digital video file, using a video cassette recorder, or printing to film using a high-resolution output device. Use of computers allows for easier exchange of artwork between departments, studios, and even countries and continents (in most low-budget American animated productions, the bulk of the animation is actually done by animators working in other countries, including South Korea, Taiwan, Japan, China, Singapore, Mexico, India, and the Philippines). As the cost of inking and painting new cels for animated films and TV programs (and of reusing older cels for newer productions) went up, and the cost of doing the same work digitally went down, the digital ink-and-paint process eventually became the standard for future animated movies and TV programs.
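A minimal sketch of this kind of digital compositing, using the Pillow imaging library's alpha compositing and hypothetical file names, might look as follows; each scanned level is assumed to be an RGBA image of the same size as the background, and production systems such as CAPS are of course far more elaborate.

```python
from PIL import Image

# Stack scanned drawing "levels" over a painted background, mimicking how
# cel layers were photographed bottom-to-top. File names are hypothetical;
# each level must be an RGBA image with transparent empty areas and the
# same dimensions as the background.
LAYERS = ["background.png", "character_body.png", "character_mouth.png"]

composite = Image.open(LAYERS[0]).convert("RGBA")
for layer_path in LAYERS[1:]:
    layer = Image.open(layer_path).convert("RGBA")
    composite = Image.alpha_composite(composite, layer)

composite.convert("RGB").save("frame_0001.png")
```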
Hanna-Barbera was the first American animation studio to implement a computer animation system for digital ink-and-paint usage. Following a commitment to the technology in 1979, computer scientist Marc Levoy led the Hanna-Barbera Animation Laboratory from 1980 to 1983, developing an ink-and-paint system that was used in roughly a third of Hanna-Barbera's domestic production, starting in 1984 and continuing until replaced with third-party software in 1996. In addition to a cost savings compared to traditional cel painting of 5 to 1, the Hanna-Barbera system also allowed for multiplane camera effects evident in H-B productions such as A Pup Named Scooby-Doo (1988).
Digital ink and paint has been in use at Walt Disney Animation Studios since 1989, where it was used for the final rainbow shot in The Little Mermaid. All subsequent Disney animated features were digitally inked-and-painted (starting with The Rescuers Down Under, which was also the first major feature film to entirely use digital ink and paint), using Disney's proprietary CAPS (Computer Animation Production System) technology, developed primarily by Pixar Animation Studios. The CAPS system allowed the Disney artists to make use of colored ink-line techniques mostly lost during the xerography era, as well as multiplane effects, blended shading, and easier integration with 3D CGI backgrounds (as in the ballroom sequence in the 1991 film Beauty and the Beast), props, and characters.
While Hanna-Barbera and Disney began implementing digital inking and painting, it took the rest of the industry longer to adapt. Many filmmakers and studios did not want to shift to the digital ink-and-paint process because they felt that digitally-colored animation would look too synthetic and would lose the aesthetic appeal of the non-computerized cel for their projects. Many animated television series were still animated in other countries using the traditionally inked-and-painted cel process as late as 2004, though most of them switched over to the digital process at some point during their run. The last major feature film to use traditional ink and paint was Satoshi Kon's Millennium Actress (2001); the last major animation productions in the West to use the traditional process were Fox's The Simpsons and Cartoon Network's Ed, Edd n Eddy, which switched to digital paint in 2002 and 2004 respectively, while the last major animated production overall to abandon cel animation was the television adaptation of Sazae-san, which remained with the technique until September 29, 2013, and switched to fully digital animation on October 6, 2013. Prior to this, the series had adopted digital animation solely for its opening credits in 2009, but retained the use of traditional cels for the main content of each episode. Minor productions, such as Hair High (2004) by Bill Plympton, have used traditional cels long after the introduction of digital techniques. Most studios today use one of a number of other high-end software packages, such as Toon Boom Harmony, Toonz (OpenToonz), Animo, and RETAS, or even consumer-level applications such as Adobe Flash, Toon Boom Technologies and TV Paint.
Computers and digital video cameras
Computers and digital video cameras can also be used as tools in traditional cel animation without affecting the film directly, assisting the animators in their work and making the whole process faster and easier. Doing the layouts on a computer is much more effective than doing it by traditional methods. Additionally, video cameras give the opportunity to see a "preview" of the scenes and how they will look when finished, enabling the animators to correct and improve upon them without having to complete them first. This can be considered a digital form of pencil testing.
Techniques
Cels
The cel is an important innovation to traditional animation, as it allows some parts of each frame to be repeated from frame to frame, thus saving labor. A simple example would be a scene with two characters on screen, one of which is talking and the other standing silently. Since the latter character is not moving, it can be displayed in this scene using only one drawing, on one cel, while multiple drawings on multiple cels are used to animate the speaking character.
For a more complex example, consider a sequence in which a person sets a plate upon a table. The table stays still for the entire sequence, so it can be drawn as part of the background. The plate can be drawn along with the character as the character places it on the table. However, after the plate is on the table, the plate no longer moves, although the person continues to move as they draw their arm away from the plate. In this example, after the person puts the plate down, the plate can then be drawn on a separate cel from them. Further frames feature new cels of the person, but the plate does not have to be redrawn as it is not moving; the same cel of the plate can be used in each remaining frame that it is still upon the table. The cel paints were actually manufactured in shaded versions of each color to compensate for the extra layer of cel added between the image and the camera; in this example, the still plate would be painted slightly brighter to compensate for being moved one layer down. In TV and other low-budget productions, cels were often "cycled" (i.e., a sequence of cels was repeated several times), and even archived and reused in other episodes. After the film was completed, the cels were either thrown out or, especially in the early days of animation, washed clean and reused for the next film. In some cases, some of the cels were put into the "archive" to be used again and again for future purposes in order to save money. Some studios saved a portion of the cels and either sold them in studio stores or presented them as gifts to visitors.
In very early cartoons made before the use of the cel, such as Gertie the Dinosaur (1914), the entire frame, including the background and all characters and items, was drawn on a single sheet of paper, then photographed. Everything had to be redrawn for each frame containing movements. This led to a "jittery" appearance; imagine seeing a sequence of drawings of a mountain, each one slightly different from the one preceding it. The pre-cel animation was later improved by using techniques like the slash and tear system invented by Raoul Barre; the background and the animated objects were drawn on separate papers. A frame was made by removing all the blank parts of the papers where the objects were drawn before being placed on top of the backgrounds and finally photographed. The cel animation process was invented by Earl Hurd and John Bray in 1915.
Limited animation
In lower-budget productions, shortcuts available through the cel technique are used extensively. For example, in a scene in which a person is sitting in a chair and talking, the chair and the body of the person may be the same in every frame; only their head is redrawn, or perhaps even their head stays the same while only their mouth moves. This is known as limited animation. The process was popularized in theatrical cartoons by United Productions of America and used in most television animation, especially that of Hanna-Barbera. The end result does not look very lifelike, but is inexpensive to produce, and therefore allows cartoons to be made on small television budgets.
"Shooting on twos"
Moving characters are often shot "on twos", that is to say, one drawing is shown for every two frames of film (which usually runs at 24 frames per second), meaning there are only 12 drawings per second. Even though the image update rate is low, the fluidity is satisfactory for most subjects. However, when a character is required to perform a quick movement, it is usually necessary to revert to animating "on ones", as "twos" are too slow to convey the motion adequately. A blend of the two techniques keeps the eye fooled without unnecessary production costs.
Academy Award-nominated animator Bill Plympton is noted for his style of animation that uses very few in-betweens and sequences that are done on 3s or on 4s, holding each drawing on the screen from 1/8 to 1/6 of a second. While Plympton uses near-constant three-frame holds, sometimes animation that simply averages eight drawings per second is also termed "on threes" and is usually done to meet budget constraints, along with other cost-cutting measures like holding the same drawing of a character for a prolonged time or panning over a still image, techniques often used in low-budget TV productions. It is also common in anime, where fluidity is sacrificed in lieu of a shift towards complexity in the designs and shading (in contrast with the more functional and optimized designs in the Western tradition); even high-budget theatrical features such as Studio Ghibli's employ the full range: from smooth animation "on ones" in selected shots (usually quick action accents) to common animation "on threes" for regular dialogue and slow-paced shots.
Animation loops
Creating animation loops or animation cycles is a labor-saving technique for animating repetitive motions, such as a character walking or a breeze blowing through the trees. In the case of walking, the character is animated taking a step with its right foot, then a step with its left foot. The loop is created so that, when the sequence repeats, the motion is seamless.
However, because an animation loop essentially uses the same bit of animation over and over again, it is easily detected and can, in fact, become distracting to an audience. In general, they are used only sparingly by productions with moderate or high budgets.
Ryan Larkin's 1969 Academy Award-nominated National Film Board of Canada short Walking makes creative use of loops. In addition, a promotional music video from Cartoon Network's Groovies featuring the Soul Coughing song "Circles" poked fun at animation loops as they are often seen in The Flintstones: Fred and Barney, along with various other Hanna-Barbera characters that aired on Cartoon Network, supposedly walking through a house, wonder why they keep passing the same table and vase over and over again.
Multiplane process
The multiplane process is a technique primarily used to give a sense of depth or parallax to two-dimensional animated films. To use this technique in traditional animation, the artwork is painted or placed onto separate layers called planes. These planes, typically sheets of transparent glass or plexiglass, are then aligned and placed with specific distances between them. The order in which the planes are placed, and the distance between them, is determined by which element of the scene is on each plane, as well as by the scene's intended depth. A camera, mounted above or in front of the planes, moves its focus toward or away from the planes during the capture of the individual animation frames. In some devices, the individual planes can be moved toward or away from the camera. This gives the viewer the impression of moving through the separate layers of art as though in a three-dimensional space.
History
Predecessors of this technique and the equipment used to implement it began appearing in the late 19th century. Painted glass panes were often used in matte shots and glass shots, as seen in the work of Norman Dawn. In 1923, Lotte Reiniger and her animation team constructed one of the first multiplane animation structures, a device called a Tricktisch. Its top-down, vertical design allowed for overhead adjusting of individual, stationary planes. The Tricktisch was used in the filming of The Adventures of Prince Achmed, one of Reiniger’s most well-known works. Future multiplane animation devices would generally use the same vertical design as Reiniger’s device. One notable exception to this trend was the Setback Camera, developed and used by Fleischer Studios. This device used miniature three-dimensional models of sets, with animated cels placed at various positions within the set. This placement gave the appearance of objects moving in front of and behind the animated characters, and was often referred to as the Tabletop Method.
The most famous device used for multiplane animation was the multiplane camera. This device, originally designed by former Walt Disney Studios animator/director Ub Iwerks, was a vertical, top-down camera crane that shot scenes painted on multiple, individually adjustable glass planes. The movable planes allowed for changeable depth within individual animated scenes. In later years, Disney Studios would adopt this technology for its own use. Designed in 1937 by William Garity, the multiplane camera used for the film Snow White and the Seven Dwarfs utilized artwork painted on up to seven separate, movable planes, as well as a vertical, top-down camera.
The final animated film by Disney that featured the use of their multiplane camera was The Little Mermaid, though the work was outsourced as Disney’s equipment was inoperative at the time. Usage of the multiplane camera or similar devices declined due to production costs and the rise of digital animation. Beginning largely with the use of CAPS, digital multiplane cameras would help streamline the process of adding layers and depth to animated scenes.
Impact
The spread and development of multiplane animation helped animators tackle problems with motion tracking and scene depth, and reduced production times and costs for animated works. In a 1957 recording, Walt Disney explained why motion tracking was an issue for animators, as well as what multiplane animation could do to solve it. Using a two-dimensional still of an animated farmhouse at night, Disney demonstrated that zooming in on the scene, using traditional animation techniques of the time, increased the size of the moon. In real-life experience, the moon would not increase in size as a viewer approached a farmhouse. Multiplane animation solved this problem by separating the moon, farmhouse, and farmland into separate planes, with the moon being farthest away from the camera. To create the zoom effect, the first two planes were moved closer to the camera during filming, while the plane with the moon remained at its original distance. This provided a depth and fullness to the scene that more closely resembled real life, which was a prominent goal for many animation studios at the time.
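Disney's farmhouse-and-moon demonstration can be approximated with simple pinhole-camera arithmetic. In the Python sketch below, which is illustrative only (the distances are invented, not taken from the 1957 recording), the apparent size of each plane scales with its distance from the camera, so moving the camera and a nearby plane closer together doubles the size of the farmhouse while leaving the distant moon essentially unchanged.

```python
def scale_factor(plane_distance, dolly_in):
    """Apparent magnification of a plane after the camera and plane move
    `dolly_in` units closer together (simple pinhole-camera approximation)."""
    return plane_distance / (plane_distance - dolly_in)

# Invented distances, purely for illustration.
for name, distance in [("farmhouse plane", 4.0), ("moon plane", 400.0)]:
    print(name, round(scale_factor(distance, dolly_in=2.0), 3))
# farmhouse plane 2.0   (doubles in apparent size)
# moon plane 1.005      (essentially unchanged, as in real life)
```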
Xerography
Applied to animation by Ub Iwerks at the Walt Disney studio during the late 1950s, the electrostatic copying technique called xerography allowed the drawings to be copied directly onto the cels, eliminating much of the "inking" portion of the ink-and-paint process. This saved time and money, and it also made it possible to put in more details and to control the size of the xeroxed objects and characters (this replaced the little known, and seldom used, photographic lines technique at Disney, used to reduce the size of animation when needed). At first, it resulted in a more sketchy look, but the technique was improved upon over time.
Disney animator and engineer Bill Justice had patented a forerunner of the Xerox process in 1944, in which drawings made with a special pencil would be transferred to a cel by pressure and then fixed. It is not known whether the process was ever used in animation.
The xerographic method was first tested by Disney in a few scenes of Sleeping Beauty and was first fully used in the short film Goliath II, while the first feature entirely using this process was One Hundred and One Dalmatians (1961). The graphic style of this film was strongly influenced by the process. Some hand inking was still used together with xerography in this and subsequent films when distinct colored lines were needed. Later, colored toners became available, and several distinct line colors could be used, even simultaneously. For instance, in The Rescuers the characters' outlines are gray. White and blue toners were used for special effects, such as snow and water.
The APT process
Invented by Dave Spencer for the 1985 Disney film The Black Cauldron, the APT (Animation Photo Transfer) process was a technique for transferring the animators' art onto cels. The process was essentially a modification of a repro-photographic process: the artists' work was photographed on high-contrast "litho" film, and the image on the resulting negative was then transferred to a cel covered with a layer of light-sensitive dye. The cel was exposed through the negative. Chemicals were then used to remove the unexposed portion. Small and delicate details were still inked by hand if needed. Spencer received an Academy Award for Technical Achievement for developing this process.
Cel overlay
A cel overlay is a cel with inanimate objects used to give the impression of a foreground when laid on top of a ready frame. This creates the illusion of depth, but not as much as a multiplane camera would. A special version of cel overlay, called line overlay, was made to complete the background rather than the foreground, and was invented to deal with the sketchy appearance of xeroxed drawings. The background was first painted as shapes and figures in flat colors, containing rather few details. Next, a cel with detailed black lines was laid directly over it, each line drawn to add more information to the underlying shape or figure and to give the background the complexity it needed. In this way, the visual style of the background matched that of the xeroxed character cels. As the xerographic process evolved, line overlay was left behind.
Computers and traditional animation
The methods mentioned above describe the techniques of an animation process that originally depended on cels in its final stages, but painted cels are rare today as the computer has moved into the animation studio. The outline drawings are usually scanned into the computer and filled with digital paint instead of being transferred to cels and then colored by hand. The drawings are composited in a computer program on many transparent "layers", much the same way as they are with cels, and made into a sequence of images that may then be transferred onto film or converted to a digital video format.
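Digital compositing of these transparent layers usually comes down to the standard "over" operator. The sketch below is a minimal, single-pixel Python illustration (premultiplied RGBA values are assumed, and the sample colors are invented); it is not the code of any particular animation package, which operate on full images rather than single pixels.

```python
def over(top, bottom):
    """Composite one premultiplied RGBA pixel over another (Porter-Duff 'over')."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    return (tr + br * (1 - ta),
            tg + bg * (1 - ta),
            tb + bb * (1 - ta),
            ta + ba * (1 - ta))

# A half-transparent red character layer over an opaque blue background layer:
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))   # (0.5, 0.0, 0.5, 1.0)
```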
It is now also possible for animators to draw directly into a computer using a graphics tablet such as a Cintiq or a similar device, where the outline drawings are done in a similar manner as they would be on paper. The Goofy short How To Hook Up Your Home Theater (2007) represented Disney's first project based on the paperless technology available today. Among the advantages are the ability to control the size of the drawings while working on them, to draw directly on a multiplane background, and to eliminate the need for photographing line tests and scanning.
Though traditional animation is now commonly done with computers, it is important to differentiate computer-assisted traditional animation from 3D computer animation, such as Toy Story and Ice Age. However, often traditional animation and 3D computer animation will be used together, as in Don Bluth's Titan A.E. and Disney's Tarzan and Treasure Planet. Most anime and many western animated series still use traditional animation today. DreamWorks executive Jeffrey Katzenberg coined the term "tradigital animation" to describe animated films produced by his studio which incorporated elements of traditional and computer animation equally, such as Spirit: Stallion of the Cimarron and Sinbad: Legend of the Seven Seas.
Many video games such as Viewtiful Joe, The Legend of Zelda: The Wind Waker and others use "cel-shading" animation filters or lighting systems to make their full 3D animation appear as though it were drawn in a traditional cel-style. This technique was also used in the animated movie Appleseed, and cel-shaded 3D animation is typically integrated with cel animation in Disney films and in many television shows, such as the Fox animated series Futurama. In one scene of the 2007 Pixar movie Ratatouille, an illustration of Gusteau (in his cookbook) speaks to Remy, who in that scene is lost in the sewers of Paris, as a figment of Remy's imagination; this scene is also considered an example of cel-shading in an animated feature. More recently, animated shorts such as Paperman, Feast, and The Dam Keeper have used a more distinctive style of cel-shaded 3D animation, capturing a look and feel similar to a 'moving painting'.
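The core of most cel-shading filters is a quantization step: continuous lighting values are snapped to a few flat bands so that 3D surfaces take on the stepped look of hand-painted cels. A minimal Python sketch of that idea follows; the function and the three-band choice are illustrative assumptions, not the shader used by any of the titles mentioned above.

```python
def toon_shade(light_dir, surface_normal, bands=3):
    """Quantize Lambertian (dot-product) lighting into a few flat brightness bands."""
    dot = sum(l * n for l, n in zip(light_dir, surface_normal))
    intensity = max(dot, 0.0)                       # ignore light arriving from behind
    band = min(int(intensity * bands), bands - 1)   # which flat band the value falls into
    return (band + 1) / bands                       # one flat brightness level per band

# A surface tilted partly away from the light still lands in the brightest of 3 bands:
print(toon_shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8)))   # 1.0
```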
Rotoscoping
Rotoscoping is a method of traditional animation invented by Max Fleischer in 1915, in which animation is "traced" over actual film footage of actors and scenery. Traditionally, the live-action footage is printed out frame by frame and registered. Another piece of paper is then placed over the live-action printouts, and the action is traced frame by frame using a lightbox. The end result still looks hand-drawn, but the motion is remarkably lifelike. The films Waking Life and American Pop are full-length rotoscoped films. Rotoscoped animation also appears in the music videos for A-ha's song "Take On Me" and Kanye West's "Heartless". In most cases, rotoscoping is mainly used to aid the animation of realistically rendered human beings, as in Snow White and the Seven Dwarfs, Peter Pan, and Sleeping Beauty.
A method related to conventional rotoscoping was later invented for the animation of solid inanimate objects, such as cars, boats, or doors. A small live-action model of the required object was built and painted white, while the edges of the model were painted with thin black lines. The object was then filmed as required for the animated scene by moving the model, the camera, or a combination of both, in real-time or using stop-motion animation. The film frames were then printed on paper, showing a model made up of the painted black lines. After the artists had added details to the object not present in the live-action photography of the model, it was xeroxed onto cels. A notable example is Cruella de Vil's car in Disney's One Hundred and One Dalmatians. The process of transferring 3D objects to cels was greatly improved in the 1980s when computer graphics advanced enough to allow the creation of 3D computer-generated objects that could be manipulated in any way the animators wanted, and then printed as outlines on paper before being copied onto cels using Xerography or the APT process. This technique was used in Disney films such as Oliver and Company (1988) and The Little Mermaid (1989). This process has more or less been superseded by the use of cel-shading.
Related to rotoscoping are methods of vectorizing live-action footage in order to achieve a very graphical look, as in Richard Linklater's film A Scanner Darkly.
Live-action hybrids
Similar to the computer animation and traditional animation hybrids described above, occasionally a production will combine both live-action and animated footage. The live-action parts of these productions are usually filmed first, the actors pretending that they are interacting with the animated characters, props, or scenery; animation will then be added into the footage later to make it appear as if it has always been there. Like rotoscoping, this method is rarely used, but when it is, it can be done to terrific effect, immersing the audience in a fantasy world where humans and cartoons co-exist. Early examples include the silent Out of the Inkwell (begun in 1919) cartoons by Max Fleischer and Walt Disney's Alice Comedies (begun in 1923). Live-action and animation were later combined in features such as Mary Poppins (1964), Who Framed Roger Rabbit (1988), Space Jam (1996), and Enchanted (2007), among many others. The technique has also seen significant use in television commercials, especially for breakfast cereals marketed to children to interest them and boost sales.
Special effects animation
Besides traditionally animated characters, objects, and backgrounds, many other techniques are used to create special elements such as smoke, lightning and "magic", and to give the animation, in general, a distinct visual appearance. Today special effects are mostly done with computers, but earlier they had to be done by hand. To produce these effects, the animators used different techniques, such as drybrush, airbrush, charcoal, grease pencil, backlit animation, diffusing screens, filters, or gels. For instance, the Nutcracker Suite segment in Fantasia has a fairy sequence where stippled cels are used, creating a soft pastel look.
See also
History of animation
Animated cartoon
Computer generated imagery
Stop motion
Paint-on-glass animation
Rubber hose animation
List of animated feature-length films
List of animated short series
List of animated television series
List of animation studios
References
Citations
Sources
External links
Audiovisual introductions in 1915
Animation techniques
Articles containing video clips |
2458789 | https://en.wikipedia.org/wiki/IBM%20System%20z9 | IBM System z9 | IBM System z9 is a line of IBM mainframe computers. The first models were available on September 16, 2005. The System z9 also marks the end of the previously used eServer zSeries naming convention. It was also the last mainframe computer that NASA ever used.
Background
System z9 is a mainframe using the z/Architecture, previously known as ESAME. z/Architecture is a 64-bit architecture that replaces the previous 31-bit-addressing/32-bit-data ESA/390 architecture while remaining completely compatible with it, as well as with the older 24-bit-addressing/32-bit-data System/360 architecture. The primary advantage of this arrangement is that memory-intensive applications like DB2 are no longer bound by 31-bit memory restrictions, while older applications can run without modification.
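The practical difference between the addressing modes is easiest to see as raw arithmetic. The short Python sketch below simply computes the address-space ceiling implied by each addressing width; it is an illustration only and says nothing about how much memory a given machine actually supports.

```python
# Maximum byte-addressable memory implied by each addressing width:
# 24-bit -> 16 MiB, 31-bit -> 2 GiB, 64-bit -> 16 EiB.
for name, bits in [("System/360 (24-bit)", 24),
                   ("ESA/390 (31-bit)", 31),
                   ("z/Architecture (64-bit)", 64)]:
    print(f"{name}: 2**{bits} = {2**bits:,} bytes")
```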
Name change
With the announcement of the System z9 Business Class server, IBM renamed the System z9 109 the System z9 Enterprise Class server. IBM documentation abbreviates them as the z9 BC and z9 EC, respectively.
Notable differences
There are several functional enhancements in the System z9 compared to its zSeries predecessors. Some of the differences include:
Support Element & HMC
The Support Element is the most direct and lowest-level way to access a mainframe. It circumvents even the Hardware Management Console (HMC) and the operating system running on the mainframe. The HMC is a PC connected to the mainframe that emulates the Support Element. All preceding zSeries mainframes used a modified version of OS/2 with custom software to provide the interface. The System z9's HMC no longer uses OS/2; instead, it uses a modified version of Linux with an OS/2-lookalike interface, to ease the transition, as well as a new interface. Unlike the previous HMC application on OS/2, the new HMC is web-based, which means that even local access is done via a web browser. Remote HMC access is available, although only over an SSL-encrypted HTTP connection. Because the web-based design makes local console access and remote access identical, an authorized remote user potentially has full control, allowing more flexibility in locating systems within data centers. IBM refers to the new HMC as a "closed platform" that does not allow the user to install software or access the command-line interface, in order to increase security and stability. The HMC is also firewalled by default, with a minimal number of open ports for remote access.
Program Directed Re-IPL
Program Directed Re-IPL is a new feature for Linux on System z9. It allows Linux systems running in an LPAR to re-IPL (reboot) themselves without operator intervention. This is accomplished by the System z9 storing the device and load parameters used to initially IPL the system.
DB2 and VSAM features
DB2, VSAM, and other data storage formats achieve greater I/O performance thanks to a new System z9 feature called a MIDAW. The System z9 also introduces the System z9 Integrated Information Processor (zIIP), a new type of processor that accelerates certain specific DB2 tasks. Modified Indirect Data Address Words (MIDAWs) are a channel programming capability of the IBM System z9 processor range, and all subsequent ranges. The MIDAW facility is an extension to the pre-existing Indirect Data Address Word (IDAW) channel programming capability, providing support for more efficient FICON channel programs. MIDAWs allow ECKD channel programs to read and write to many storage locations using one channel command, which means fewer signals up and down the channel are required to transfer the same amount of data. This reduction is particularly noticeable for Extended Format data sets, accessed through Media Manager. Examples include Extended Format Sequential data sets, Extended Format VSAM data sets and certain types of DB2 tablespaces. While each of these data set organizations has alternatives, each has a distinct set of advantages, whether in the area of performance, space saving (through hardware-assisted data compression), or scalability (by allowing an individual data set to exceed 4 GiB).
Java features
Java 1.4 and higher support both 32-bit and 64-bit operation on the z9. The System z9 also supports the zAAP processor, which allows most of the Java workload to be offloaded from the normal instruction processors. Java workloads executed by the zAAP processor do not count towards the IBM-rated capacity of the z9. This reduces the z9's total cost of ownership compared with other IBM platforms, as IBM would otherwise raise a customer's (software) license fees after an additional (hardware) processor was installed. The zAAP also enables integration of new Java-based Web applications with the core z/OS backend database environment for high performance, reliability, availability, and security.
Cryptography
The System z9 adds 128-bit Advanced Encryption Standard (AES) to the list of hardware-based cryptographic algorithms. Other hardware-accelerated features include additional random number generation and SHA algorithms. This specialized encryption hardware means the System z9 can potentially outperform other platforms that must rely on encryption software.
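For readers unfamiliar with the algorithm itself, the snippet below shows AES-128 encryption in ordinary software using a recent version of the third-party Python cryptography package; it is purely an illustration of the cipher that the z9 accelerates in dedicated hardware, not an example of invoking the machine's cryptographic facility, and the message shown is a placeholder.

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)    # 128-bit key, as in AES-128
iv = os.urandom(16)     # initialization vector for CBC mode

# Pad the placeholder message to a multiple of the 16-byte AES block size.
padder = padding.PKCS7(128).padder()
plaintext = padder.update(b"example mainframe workload data") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()
print(len(ciphertext) % 16)   # 0: the ciphertext length is block-aligned
```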
LPARs
The System z9 supports up to 60 LPARs, up from the previous maximum of 30.
Larger memory capacity
The System z9 supports twice its immediate predecessors' maximum memory configurations: now up to 512 GB for the z9 EC and up to 64 GB for the z9 BC.
Concurrent system board replacement
The System z9 supports nondisruptive processor and memory replacement. That means a technician can replace an entire system board without ending any applications and without restarting any operating systems. In most configurations a System z9 can even manage this feat without any reduction in performance or capacity for the running applications.
4 Gbit FICON and FCP
In May 2006, IBM added 4 Gigabit FICON and FCP support to the System z9 for faster I/O to storage devices. IBM also added a lower cost 2-port 4 Gbit FICON/FCP I/O adapter to the System z9 option list.
Smooth subcapacity increments
Also in May 2006, IBM introduced subcapacity settings to its high-end model. For the first time, mainframe processors allow small, smooth steps through the entire processor range. This feature allows IBM's customers to control their software costs precisely and to pay for exactly as much capacity as they need, without harsh price discontinuities at certain capacity increments. (IBM started offering variable subcapacity software pricing in 2000, and some other software vendors now offer similar terms, so hardware subcapacity settings are of primary interest when running so-called full-capacity software products.)
Group capacity limits
Available with z/OS Release 8, Group Capacity Limits allows an installation to define a group of LPARs within a single z9 or z10 machine whose capacity usage can be limited to a specific number of MSUs. Usage is based on the rolling 4-hour average CPU consumption, also measured in MSUs. A group need not necessarily be the same as an LPAR cluster, and LPARs can participate whether or not they are in a sysplex.
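A simplified sketch of the rolling 4-hour average on which the cap is based follows, in illustrative Python only; the 5-minute sampling interval and the function names are assumptions for the example, not IBM's actual workload-management implementation.

```python
def rolling_4hr_average(msu_samples, samples_per_hour=12):
    """Rolling 4-hour average MSU consumption from fixed-interval samples
    (assumed here to be taken every 5 minutes, i.e. 12 per hour)."""
    window = 4 * samples_per_hour
    averages = []
    for i in range(len(msu_samples)):
        recent = msu_samples[max(0, i - window + 1): i + 1]
        averages.append(sum(recent) / len(recent))
    return averages

def within_group_limit(averages, group_limit_msu):
    """True at each sample where the group's rolling average stays under its cap."""
    return [avg <= group_limit_msu for avg in averages]
```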
Separate processor pools
While previous mainframe generations (including the predecessor zSeries z990) supported specialty processors, such as zAAPs and ICFs, these were all managed by PR/SM out of the same processor pool (Pool 2). The IBM System z9 EC introduced the concept of separate pools for different types of specialty processor, which greatly eases the task of managing and measuring the performance of the different processor types. With the z9 (and IBM System z10) the following pools are defined:
1 General-purpose processors
3 IFLs
4 zAAPs
5 ICFs
6 zIIPs
Pool 2 is no longer used.
In addition to these 5 pools of characterized processors, there are three other categories of processor:
Service Assist Processors (for assisting with I/O operations) which all machines have.
Spare processors (to replace characterized processors in the event of a failure) which all machines have.
Unpurchased processors (which can be purchased and then characterized) which all but the most fully characterized machines have.
Models
Enterprise Class
The System z9 Enterprise Class server, formerly known as the System z9 109, was the flagship of the System z9 series until the announcement of the IBM System z10. The most powerful model, the 2094-S54, achieves approximately twice the transactional performance of its most powerful predecessor, the zSeries z990 (2084-332). A single 2094-S54 machine provides up to 54 main processors (plus scores of secondary processors), at least two spare main processors, and up to 512 GB of main memory. Minimum memory is 16 GB.
The System z9 EC is available in five hardware model configurations:
2094-S08
2094-S18
2094-S28
2094-S38
2094-S54
Business Class
On April 27, 2006, IBM announced the System z9 Business Class, also known as the z9 BC, as the successor to the zSeries z890 mainframe. IBM positioned the z9 BC as a midrange system with a low cost of acquisition and up to twice the performance of the z890. The first z9 BCs began shipping on May 26, 2006. The z9 BC supports up to seven main processors (plus a dozen or more secondary processors). While the z9 BC can provide general-purpose central processors (CPs), IBM is actively marketing the use of low-cost specialty processors such as IFLs, zAAPs, and the new zIIP. (Every z9 BC can support at least three specialty engines even when maximally configured with CPs.) The z9 BC comes with a minimum of 8 GB of RAM and is expandable up to 64 GB. IBM offers kits that allow current z800 and z890 customers to upgrade to the z9 BC. A z9 BC customer can then upgrade to the z9 EC if extra capacity is required.
The System z9 BC is available in two hardware model configurations:
2096-R07
2096-S07
The seven System z9 hardware configurations support scores of software model configurations: 2094-401 through 2094-754 for the EC and 2096-A01 through 2096-Z04 for the BC (plus IFL-only models).
Pricing
The acquisition price for the System z9 ranges from "about $100,000" (IBM reported U.S. 2006 price, 2096-A01 model) to millions of dollars for the 2094-S54. (These prices are for new installations. Generally there are lower prices when upgrading from the immediate predecessor model, more like many software products and quite unlike most other hardware products.) For comparison, when new, the zSeries z890 had a starting price about twice that of the System z9 BC.
Successor machine
In February 2008, the IBM System z10 Enterprise Class was announced (and later in 2008 the z10 Business Class (BC) was announced). The z10 features quad-core technology, for up to 64 processors. The z10 has a number of power-saving, space-saving and throughput improvements compared to the z9.
References
External links
IBM.com: IBM Z mainframes homepage
IBM.com: Latest mainframe models
IBM.com: Hardware Management Console Operations Guide - Version 2.9.0
IBM.com: System z9 109 System Overview
IBM Redbooks for System z
z9
Products introduced in 2005
64-bit computers |
3387154 | https://en.wikipedia.org/wiki/Kittur | Kittur | Kittur, historically as Kittoor, is a taluka in the Belagavi district of the Indian state of Karnataka. It was part of Bailhongal taluka but was declared as an independent taluka on 23October 2012 by the Chief Minister of Karnataka on the inauguration of Kittur Utsav. It is 177th Taluk of Karnataka State. It is a place of historical importance because of the armed rebellion of Kittur Chennamma (1778–1829), Rani of the State of Kittur against the British East India Company, during which a British Commissioner, St John Thackeray was killed.
History
On the outskirts of the town lie the ruins of the palace within a fort. The palace was the residence of the Rani Chennamma.
In the 18th century, Kittur was ruled by the Marathas, until the Third Anglo-Maratha War, when it came under British suzerainty.
Kittur was ruled by Mallasaraja in the early 19th century. His only son predeceased him, and subsequently, he was succeeded by his wife, Queen Chennamma.
In connection with a disputed succession to this chiefship in 1824, St John Thackeray, Commissioner of Dharwad, was killed in a battle when approaching the Kittur fort. Later another unit stormed Kittur and captured Queen Chennamma, who was imprisoned in Bailhongal Jail where she died. Rani Chennamma became a legend.
Her death was followed by revolts led by her general Sangolli Rayanna, who was also considered a hero, during which many British offices and records were destroyed. He was later hanged in 1831.
The town lends its name to the fictitious coastal town in the 2008 novel Between the Assassinations by Aravind Adiga (Belagavi District has no coast, which rules out the real Kittur being the setting).
See also
Rani Chennamma
St John Thackeray attack on Kittur
Belgaum
Bailhongal
Sangolli Rayanna
References
External links
Kittur Fort on Google Maps
Villages in Belagavi district |
50596310 | https://en.wikipedia.org/wiki/Bandstand%20%28musical%29 | Bandstand (musical) | Bandstand: The New American Musical (or simply Bandstand) is an original musical composed by Richard Oberacker with book and lyrics by Oberacker and Robert Taylor.
The first musical certified by the organization Got Your 6, it tells the story of a group of veterans returning home to the United States after World War II. Struggling to fit into their old lives while dealing with the lingering effects of the war, including post-traumatic stress and survivor's guilt, they form a band composed solely of veterans to compete in a national patriotic radio contest in New York City. The winning song will be performed by the band in a new Hollywood film, which will make them household names. The group of veterans play their hearts out while also giving post-war America a look at the effects the Second World War had on its heroes.
The original production of Bandstand, directed by Andy Blankenbuehler and starring Laura Osnes, Corey Cott and Beth Leavel, premiered at the Paper Mill Playhouse in Millburn in October 2015 and opened on Broadway on April 26, 2017, closing on September 17, 2017.
Productions
Paper Mill Playhouse premiere
A workshop of Bandstand was held in September 2014 in New York City, and featured Laura Osnes, Corey Cott and Beth Leavel.
The show, retitled The Bandstand, began previews on October 8, 2015, at the Paper Mill Playhouse in Millburn, New Jersey, before its official opening on October 18, 2015, for a limited run through November 8, 2015. Direction and choreography were by Andy Blankenbuehler, and the cast starred Corey Cott, Laura Osnes, and Beth Leavel as Donny, Julia, and Mrs. Adams, respectively. The score draws on swing, bebop, and jitterbug.
Broadway production
The musical, once again titled Bandstand, premiered on Broadway at the Bernard B. Jacobs Theatre on April 26, 2017, after starting previews on March 31, with Osnes and Cott and direction and choreography by Andy Blankenbuehler. The Broadway cast features Beth Leavel, Alex Bender, Joe Carroll, Brandon James Ellis, James Nathan Hopkins, and Geoff Packard. The production closed on Broadway on September 17, 2017, after 24 previews and 166 regular performances.
A professional recording of the Broadway production was screened in movie theaters on June 25, 2018 and June 28, 2018. The professional recording made a reappearance in theaters November 15, 2018 and November 19, 2018. During the COVID-19 pandemic, Playbill offered the professional recording of the Broadway production for streaming on their website from April 10 to April 17, 2020, with a portion of the proceeds going towards the Actors Fund of America. They also hosted a live watch party of the recording on April 11.
US national tour
A non-Equity US tour opened October 29, 2019 in College Station, Texas, starring Jennifer Elizabeth Smith as Julia Trojan and Zack Zaromatidis as Donny Novitski. On April 16, 2020, it was announced that the tour would not resume following its early closure due to the COVID-19 pandemic. The last performance was on March 12, 2020 in Easton, Pennsylvania.
Synopsis
Act I
Newly back from the front lines, the young vet, pianist, and singer/songwriter Donny Novitski returns home from the war to Cleveland, Ohio in 1945, to find an America eager to get back to life ("Just Like It Was Before"). Unable to find a place in post-war Cleveland for himself, Donny hears of a National Radio Swing Band Competition in Tribute to the Troops, and hatches a plan to create a band composed entirely of fellow vets for a shot at instant fame and Hollywood fortune ("Donny Novitski"). With a statewide competition to win first, Donny puts his band together: Jimmy Campbell on saxophone and clarinet, Davy Zlatic on bass, Nick Radel on trumpet, Wayne Wright on trombone, and Johnny Simpson on drums ("I Know a Guy"). Each of the members of the band have struggled with the adjustment from military life to life at home, but they find friendship and commonality over this shared goal.
Following their first performance ("Ain't We Proud"), Donny makes good on a promise to check in on Julia Trojan, the young widow of the man who had been his best friend in the war, Michael. Meanwhile, the vets in Donny's new band try desperately to readjust to civilian life ("Proud Riff"). As Julia prepares to host Donny for dinner in the hope of learning more about her husband's death, she confides in her mother, June Adams that she just wants to be who she was before her husband's death ("Who I Was"). During dinner, Donny avoids discussing the death of Julia's husband. Later, haunted by the memories of war, the band members play their instruments, and Donny arrives at the church where Julia sings ("Counterpoint/Pie Jesu"). Impressed by her voice, Donny invites Julia to hear the band he's put together. Excitedly, Julia's mother hopes that all will soon be "Just Like It Was Before (Reprise)".
At the club, Donny invites Julia to sing a standard with the band ("First Steps First"). After the performance, Donny convinces a hesitant Julia to join the band, and they begin rehearsing in earnest ("Breathe") for their first gig together, trying out a new tune, "You Deserve It". When Donny's confidence in winning the competition falters, Julia offers him a journal of poems she has written. Inspired by Julia's take on post-war life, Donny composes a melody for one of her poems that he is convinced will win them the preliminary in Ohio which guarantees them a slot on the final broadcast in New York City. The Ohio broadcast is in full swing ("Dwight Anson & Jean Ann"), as the Donny Nova Band featuring Julia Trojan takes the stage with "Love Will Come and Find Me Again". The band wins the state preliminary, but are told they have to pay their own way to New York and are required to compete in a second round of elimination to secure their spot in the final broadcast. From disbelief and despair, Donny rallies his band of brothers with a vision of a world where they are recognized for their sacrifices and talents ("Right This Way").
Act II
With renewed determination that "Nobody" tells them 'no', the Band begins playing every available club in Cleveland. Their growing number of fans celebrate that "The Boys Are Back". Julia and Donny continue their songwriting collaboration with "I Got a Theory" about Cleveland itself, raising more money and hometown support. With the New York trip imminent, and the bond between them growing stronger, Julia presses Donny for the truth of her husband's death in battle. Horrified by the revelation that Donny accidentally caused Michael's death in a friendly fire incident, Julia abandons the band. After expressing her feelings to her mother, June tells Julia that sometimes "Everything Happens" without reason or fault. After deep reflection, Julia returns to Donny with a new poem as an apology, which paints a raw and truthful portrait of Donny and the Band members: Johnny was severely injured and suffers from chronic pain, amnesia, and cognitive issues; Nick was a prisoner of war, giving him anger and trust issues; Davy liberated Dachau and has turned to alcohol and humor to cope with the memories; Wayne suffers from mental illness (likely OCD), meaning his children no longer recognize him and his marriage breaks down; Jimmy focuses on his law studies to avoid letting anybody in after terrible loss; Donny is an insomniac who is experiencing survivor's guilt after the death of Michael, Julia's husband. Inspired, Donny sets Julia's new poem to music, but both of them realize that the lyrics must be rewritten if the song is ever to be performed in public; they turn it into a love song about a girl and her returning soldier ("Welcome Home").
After having received a generous donation from their hometown fans, the Band sets off to live their dream of being "A Band in New York City". After a magical first night in New York, Donny and Julia find themselves outside her hotel room door, finally admitting their true feelings for one another ("This is Life"). Backstage at the final broadcast, moments before their appearance, the band realizes that the fine print of the contract they've signed is a trap, and the promised prize a sham. Refusing to allow their military service to be sentimentalized and exploited by the contest promoters, and unwilling to give away the rights to his song, Donny convinces the band to make the riskiest choice of all, and fight for themselves, and he and Julia kiss ("This is Life (Reprise)"). Live on air for the entire country to hear, the band stages a virtual coup d'état of the broadcast as Julia sings every brutally honest word of her original poem ("Welcome Home (Finale)").
In an "Epilogue", a year later, the Donny Nova Band featuring Julia Trojan find themselves to be celebrated stars, with sold-out New York concerts and a nationwide tour.
Musical numbers
Paper Mill Playhouse 2015
Source: Stage View
Act I
"Just Like It Was Before" – Flora, Oscar, Donny, and Ensemble
"Donny Novitski" – Donny
"I Know a Guy" – Jimmy, Davy, Nick, Donny, Wayne, Johnny, and Ensemble
"Ain't We Proud" – Donny, Jimmy, Johnny, Davy, Nick, Wayne, and Ensemble
"Men Never Like to Talk" – Mrs. Adams
"Counterpoint/Pie Jesu" – Julia and Ensemble
"First Steps First" – Julia and Donny
"Will That Be All?" – Julia, Donny, and Dolores
"You Deserve It" – Donny and Julia, the Band, and Ensemble
"What's the Harm in That?" – Julia
"Worth It" – Donny and Julia, with the Band
"Right This Way" – Donny and the Band
Act 2
"Nobody" – Donny, Wayne, Nick, Davy, Julia, Johnny, Jimmy, and Ensemble
"Love Will Come and Find Me Again" – Julia
"I Got a Theory" – Donny, Julia, the Band, and Ensemble
"Everything Happens" – Mrs. Adams
"Welcome Home" – Julia, with the Band
"A Band in New York City" – Donny, Julia, the Band, and Ensemble
"Give Me a Reason" – Donny
"Worth It (Reprise)" – Donny and Julia, with the Band
"Welcome Home (Finale)" – Julia, with the Band
Broadway 2017
Act I
"Just Like It Was Before" – Company
"Donny Novitski" – Donny
"I Know a Guy" – Jimmy, Davy, Nick, Wayne, Donny and Company
"Ain't We Proud" – Donny and The Band
"Proud Riff" – Orchestra
"Who I Was" – Julia
"Counterpoint/Pie Jesu" – Julia and The Band
"Just Like It Was Before (Reprise)" – Mrs. Adams
"First Steps First" – Julia, Donny and The Band
"Breathe" – Donny, Nick, Wayne, Davy, Jimmy, Johnny and Julia
"You Deserve It" – Donny, Julia and The Band
"Dwight Anson & Jean Ann" – Jean Ann and Orchestra
"Love Will Come and Find Me Again" – Julia and The Band
"Right This Way" – Donny, Nick, Wayne, Davy, Jimmy, Johnny and Julia
Act 2
Entr'acte – Orchestra †
"Nobody" – Donny, Wayne, Nick, Davy, Julia, Mrs. Adams, Johnny, Jimmy and Company
"The Boys Are Back" – Company
"I Got a Theory" – Julia, Donny, Wayne, Nick, Davy, Johnny, Jimmy and Company
"Everything Happens" – Mrs. Adams
"Welcome Home" – Donny, Julia and The Band
"A Band in New York City" – Johnny, Davy, Jimmy, Nick, Wayne, Julia, Donny and Company
"This is Life" – Donny and Julia
"This is Life (Reprise)" – Donny
"Welcome Home (Finale)" – Julia and The Band
"Epilogue" – Donny, Julia, The Band and Company
† Not featured on Original Broadway Cast Recording
Characters and original cast
The characters and original cast:
Notable Broadway replacements
Joey Pero was injured in February 2017 and did not move with the musical to Broadway. He joined the Broadway production on June 30, 2017 in his original role of Nick, as well as performing as Nick when Bandstand had its two-night-only release in movie theaters all throughout the United States.
Carleigh Bettiol replaced Jessica Lea Patty as Jo, Julia's understudy, and a member of the ensemble on August 1, 2017.
Critical reception
Reviews
The Broadway production of Bandstand received mixed reviews from critics after opening on April 26, 2017. Critics across the board praised Andy Blankenbuehler for his choreography, which the New York Theatre Guide calls "superb in the extreme." The Chicago Tribune describes Blankenbuehler's choreography as having a "unique kindness and fragility." The main criticism of Bandstand regards the plot and character development. The show is applauded for its attempt to delve into the deeper issues that plagued those who came home after World War II. Despite this attempt, critics found the plot "a little too sweet" as it follows a predictable timeline. The unconventional look into the characters that make up the Donny Nova Band gives the show a "raw nerve", but the veterans were provided with "few defining musical moments of their own."
Got Your 6 Certification
Starting in late 2015, writers Richard Oberacker and Robert Taylor received feedback from Got Your 6 to ensure that Bandstand was an accurate portrayal of WWII veterans. This led to a discussion between real veterans and the cast, engaging the cast and crew on how to avoid stereotypes and create characters that resemble real people. The collaboration with Got Your 6 resulted in Bandstand being the "first theater production to be 6 Certified for the show's reasonable and accurate veteran portrayals."
Awards and nominations
Original Broadway Production
References
External links
Internet Broadway Database
2015 musicals
Broadway musicals
Tony Award-winning musicals
Musicals about World War II |
5491367 | https://en.wikipedia.org/wiki/U.S.%20Route%2030%20in%20Indiana | U.S. Route 30 in Indiana | U.S. Route 30 (US 30) is a road in the United States Numbered Highway System that runs from Astoria, Oregon, to Atlantic City, New Jersey. In Indiana, the route runs from the Illinois state line at Dyer to the Ohio state line east of Fort Wayne and New Haven. The of US 30 that lie within Indiana serve as a major conduit. The entire length of U.S. Route 30 in Indiana is included in the National Highway System (NHS). The highway includes four-lane, rural sections, an urbanized, four-lane divided expressway, and several high-traffic, six-lane freeway areas. First designated as a US Highway in 1926, US 30 replaced the original State Road 2 (SR 2) and SR 44 designation of the highway which dated back to the formation of the Indiana State Road system. A section of the highway originally served as part of the Lincoln Highway. Realignment and construction projects have expanded the highway to four lanes across the state, and the road is now part of a long stretch of US 30 from New Lenox, Illinois, to Canton, Ohio, where the road has at least four lanes (excluding ramps). There are over 40 traffic signals between I-65 at Merrillville and I-69 at Fort Wayne.
Route description
The entire length of U.S. Route 30 in Indiana is included in the National Highway System (NHS), a network of highways that are identified as being most important for the economy, mobility and defense of the United States. The highway is maintained by the Indiana Department of Transportation (INDOT), similar to all other U.S. Highways in the state. The department tracks the traffic volumes along all state highways as a part of its maintenance responsibilities using a metric called average annual daily traffic (AADT), calculated along a segment of roadway for any average day of the year. In 2010, INDOT found that the lowest traffic levels were 10,870 vehicles and 4,750 commercial vehicles using the highway daily, on the section between US 31 and SR 331. The peak traffic volumes were 69,280 vehicles and 12,660 commercial vehicles AADT along a section of US 30 that is concurrent with I-69, between the Lima Road (interchange 311) and Coldwater Road (interchange 312) exits in Fort Wayne.
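AADT itself is a simple ratio: the total traffic counted or estimated over a year, divided by the number of days in that year. The short Python sketch below illustrates the calculation using the peak figure quoted above; the annual total shown is back-calculated purely for illustration and is not an INDOT statistic.

```python
def aadt(annual_vehicle_count, days_in_year=365):
    """Average annual daily traffic: total yearly volume divided by the number of days."""
    return annual_vehicle_count / days_in_year

# 25,287,200 vehicles over a year works out to the quoted 69,280 vehicles per day.
print(round(aadt(25_287_200)))   # 69280
```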
Illinois to Valparaiso
US 30 enters Dyer from Lynwood, Illinois, along the original alignment of the Lincoln Highway, as a four-lane divided highway. At Moeller Street, the roadway becomes a four-lane highway with a center turn lane before reaching an at-grade intersection with CSX railroad tracks. Thereafter, the road returns to four-lane divided highway before a traffic light at US 41 in Schererville and passing under Norfolk Southern railroad tracks. After US 41, the original alignment of the Lincoln Highway leaves US 30 and continues along the same route as old State Road 330 (SR 330). US 30 begins to curve towards the southeast, still as a four-lane divided highway. The highway has a traffic light at SR 55, heading east as the roadway enters Merrillville, where the route becomes a six-lane divided highway and has an interchange at Interstate 65 (I-65). At Colorado Street in Merrillville, the road narrows back to a four-lane divided highway.
After a traffic light at the southern terminus of SR 51 in Hobart, the original alignment of the Lincoln Highway rejoins US 30. The highway passes through a mix of farmland and residential properties on the way to Valparaiso, entering the city and passing through commercial properties. The highway has a traffic light at SR 2 at the western end of the concurrency of the two roads. From there, the road crosses railroad tracks, passes south of Valparaiso University, and has a traffic light at the eastern terminus of SR 130. After passing the traffic light at SR 130, the road has a full interchange with SR 49 and the eastern terminus of the SR 2 and US 30 concurrency. Continuing east, the road passes the Porter County Municipal Airport and proceeds east-southeast from Valparaiso, towards Plymouth.
Valparaiso to Allen County
After leaving the Valparaiso area, US 30 passes through rural farmland, with an intersection at US 421 northeast of Wanatah and an at-grade railroad crossing with the Chesapeake and Indiana Railroad. East of the railroad tracks is an intersection with SR 39 and a bridge across the Kankakee River. Then the route briefly swings slightly to the north of the old Lincoln Highway alignment to accommodate an interchange at US 35.
US 30 runs along the north side of Plymouth, passing through an interchange with the northern terminus of SR 17 and near the Plymouth Municipal Airport. The route curves around the northeast side of the city, having a major interchange with US 31 before heading east-southeast towards Warsaw. At Bourbon, the highway has an interchange with SR 331. The road curves east before entering Warsaw and has an interchange with SR 15, south of the Warsaw Municipal Airport. After passing the airport, the road enters a mix of commercial and residential properties. As it bypasses Warsaw the highway passes through a highly commercial area and has nine traffic signals within four miles, causing frequent traffic backups. One of these is a traffic light at an old alignment of the Lincoln Highway, before US 30 passes north of Winona Lake and heads towards Columbia City.
At Columbia City, the road turns southeast and has traffic lights at SR 109, SR 9, and SR 205, again closely spaced, resulting in frequent congestion. After SR 205, US 30 heads east towards Fort Wayne, paralleling the Chicago, Fort Wayne and Eastern Railroad.
Allen County to Ohio
Western Allen County
US 30 crosses into Allen County at a signalized intersection with Whitley County Road 800 East (signed as County Line Road). After passing a pair of abandoned rest areas, the four-lane divided highway with partial access control then becomes a full access controlled freeway just east of the signalized intersection at Kroemer Road. Immediately thereafter, there is a trumpet interchange with US 33 (Goshen Road), at the western terminus of US 33's concurrency with US 30. From there, the joined routes proceed southeast as a six-lane (counting auxiliary lanes) freeway, passing under Hillegas Road, to a cloverleaf interchange with I-69. At that junction, US 33 joins southbound I-69 (and westbound US 24), while US 30 loops to the north, to run concurrent with both northbound I-69 and eastbound US 24. The through lanes revert to an urban arterial and continue southeast into Fort Wayne as Goshen Road, carrying SR 930 only as far as Coliseum Boulevard (where it departs to the east, leaving Goshen Road to revert to an undivided city street).
Fort Wayne to New Haven
US 30's concurrency with I-69 is a six-lane urban interstate with interchanges at Lima Road (US 27 and SR 3) and Coldwater Road (formerly SR 327 and prior to that, US 27). At the interchange of I-69 and I-469, US 30 heads east concurrent with I-469 to loop around the north and east sides of Fort Wayne, heading toward New Haven. I-469 is a four-lane interstate passing through a mix of farmland and suburban residential properties. Initially proceeding east, the interstate crosses the St. Joseph River and has an interchange at Maplecrest Road before turning southeast, then south around the northeast side of Fort Wayne to subsequent interchanges with SR 37 followed by US 24. After the US 24 interchange, the interstate crosses the Maumee River and Norfolk Southern railroad tracks before US 30 departs I-469 east of downtown New Haven at the eastern terminus of SR 930.
Eastern Allen County
After I-469, US 30 heads southeast away from New Haven, passing through rural farmland as a four-lane divided highway with partial access control. The route bypasses the tiny hamlets of Zulu, Tillman, and Townley with an intersection at SR 101 just to the north of the latter. US 30 completes its journey across the Hoosier State and enters Ohio (at State Line Road), continuing southeast toward Van Wert.
History
The Lincoln Highway was planned in 1913 to run west to east across Indiana, passing through South Bend and Fort Wayne. In 1915, the highway opened and passed through downtown Fort Wayne on its route through Indiana, and was assigned the designation of Main Market route number 2 in 1917. Further designations saw the route become SR 2 from the Illinois state line to Valparaiso, SR 44 from Valparaiso to Fort Wayne, and SR 2 from Fort Wayne to the Ohio state line. In the early 1920s, the Lincoln Highway was moved south between Valparaiso and Fort Wayne, to what is now known mostly as Old US 30, passing through Plymouth and Warsaw. A section of US 30 in Dyer known as the "ideal section" of the Lincoln Highway was opened in 1923 and rebuilt in the 1990s. In 1924, the sections of the road that were part of the original Lincoln Highway were paved, followed by the paving of the rest of US 30, which was commissioned in 1926. In 1927, a small realignment between Hanna and SR 29 (current US 35) took place.
During the 1950s, US 30 in Fort Wayne was rerouted to a "circumurban" highway that was built along portions of the alignments of Beuter Road and California Road, to bypass most of Fort Wayne. But this "circumurban" route, later renamed Coliseum Boulevard since it passes directly by the Allen County War Memorial Coliseum, quickly became a congested urban highway in its own right as it was not built to freeway standards. In 1998, US 30 in Fort Wayne was again rerouted onto I-69 and I-469, becoming a true controlled access freeway bypass for most of Fort Wayne and New Haven on the north and east side of the two cities. The old Coliseum Boulevard routing was assigned the SR 930 designation as a result, when local officials refused to let INDOT fully decommission the route and turn responsibility for it over to the cities or the county.
Major intersections
See also
References
External links
30
Indiana
U.S. Route 030 in Indiana
Expressways in the United States
Transportation in Fort Wayne, Indiana
Transportation in Lake County, Indiana
Transportation in Porter County, Indiana
Transportation in LaPorte County, Indiana
Transportation in Starke County, Indiana
Transportation in Marshall County, Indiana
Transportation in Kosciusko County, Indiana
Transportation in Whitley County, Indiana
Transportation in Allen County, Indiana |
735661 | https://en.wikipedia.org/wiki/Deterrence%20theory | Deterrence theory | Deterrence theory refers to scholarship and practice on how threats or limited force by one party can convince another party to refrain from initiating some course of action. The topic gained increased prominence as a military strategy during the Cold War with regard to the use of nuclear weapons and is related to but distinct from the concept of mutual assured destruction, which models the preventative nature of full-scale nuclear attack that would devastate both parties in a nuclear war. The central problem of deterrence revolves around how to credibly threaten military action or nuclear punishment on the adversary despite its costs to the deterrer.
Deterrence is widely defined as any use of threats (implicit or explicit) or limited force intended to dissuade an actor from taking an action (i.e. maintain the status quo). Deterrence is unlike compellence, which is the attempt to get an actor (such as a state) to take an action (i.e. alter the status quo). Both are forms of coercion. Compellence has been characterized as harder to successfully implement than deterrence. Deterrence also tends to be distinguished from defense or the use of full force in wartime.
Deterrence is most likely to be successful when a prospective attacker believes that the probability of success is low and the costs of attack are high. The central problem of deterrence is to credibly communicate threats. Deterrence does not necessarily require military superiority.
"General deterrence" is considered successful when an actor who might otherwise take an action refrains from doing so due to the consequences that the deterrer is perceived likely to take. "Immediate deterrence" is considered successful when an actor seriously contemplating immediate military force or action refrains from doing so. Scholars distinguish between "extended deterrence" (the protection of allies) and "direct deterrence" (protection of oneself). Rational deterrence theory holds that an attacker will be deterred if they believe that:(Probability of deterrer carrying out deterrent threat x Costs if threat carried out) > (Probability of the attacker accomplishing the action x Benefits of the action)This model is frequently simplified as:Costs x P(Costs) > Benefits x P(Benefits)
History
Most of the innovative work on deterrence theory occurred from the late 1940s to mid-1960s. Historically, scholarship on deterrence has tended to focus on nuclear deterrence. Since the end of the Cold War, there has been an extension of deterrence scholarship to areas that are not specifically about nuclear weapons.
A distinction is sometimes made between nuclear deterrence and "conventional deterrence."
The two most prominent deterrent strategies are "denial" (denying the attack the benefits of attack) and "punishment" (inflicting costs on the attacker).
Concept
The use of military threats as a means to deter international crises and war has been a central topic of international security research for at least 200 years.
The concept of deterrence can be defined as the use of threats or limited force by one party to convince another party to refrain from initiating some course of action. In Arms and Influence (1966), Schelling offers a broader definition of deterrence, defining it as "to prevent from action by fear of consequences." Glenn Snyder also offers a broad definition, arguing that deterrence involves both the threat of sanction and the promise of reward.
A threat serves as a deterrent to the extent that it convinces its target not to carry out the intended action because of the costs and losses that target would incur. In international security, a policy of deterrence generally refers to threats of military retaliation directed by the leaders of one state to the leaders of another in an attempt to prevent the other state from resorting to the use of military force in pursuit of its foreign policy goals.
As outlined by Huth, a policy of deterrence can fit into two broad categories: preventing an armed attack against a state's own territory (known as direct deterrence) or preventing an armed attack against another state (known as extended deterrence). Situations of direct deterrence often occur if there is a territorial dispute between neighboring states in which major powers like the United States do not directly intervene. On the other hand, situations of extended deterrence often occur when a great power becomes involved. The latter case has generated most interest in academic literature. Building on the two broad categories, Huth goes on to outline that deterrence policies may be implemented in response to a pressing short-term threat (known as immediate deterrence) or as strategy to prevent a military conflict or short-term threat from arising (known as general deterrence).
A successful deterrence policy must be considered not only in military terms but also in political terms: international relations, foreign policy, and diplomacy. In military terms, deterrence success refers to preventing state leaders from issuing military threats and actions that escalate peacetime diplomatic and military co-operation into a crisis or militarized confrontation that threatens armed conflict and possibly war. The prevention of crises or wars, however, is not the only aim of deterrence. In addition, defending states must be able to resist the political and military demands of a potential attacking nation. If armed conflict is avoided at the price of diplomatic concessions to the maximum demands of the potential attacking nation under the threat of war, it cannot be claimed that deterrence has succeeded.
Furthermore, as Jentleson et al. argue, two key sets of factors shape successful deterrence: first, a defending state strategy that balances credible coercion and deft diplomacy consistent with the three criteria of proportionality, reciprocity, and coercive credibility, while minimizing international and domestic constraints; and second, the extent of the attacking state's vulnerability, as shaped by its domestic political and economic conditions. In broad terms, a state wishing to implement a strategy of deterrence is most likely to succeed if the costs of noncompliance that it can impose on another state and the benefits of compliance it can offer are greater than the benefits of noncompliance and the costs of compliance.
Deterrence theory holds that nuclear weapons are intended to deter other states from attacking with their nuclear weapons, through the promise of retaliation and possibly mutually assured destruction. Nuclear deterrence can also be applied to an attack by conventional forces. For example, the doctrine of massive retaliation threatened to launch US nuclear weapons in response to Soviet attacks.
A successful nuclear deterrent requires a country to preserve its ability to retaliate, either by responding before its own weapons are destroyed or by ensuring a second-strike capability. A nuclear deterrent is sometimes composed of a nuclear triad, as in the case of the nuclear weapons owned by the United States, Russia, China, and India. Other countries, such as the United Kingdom and France, have only sea-based and air-based nuclear weapons.
Proportionality
Jentleson et al. provide further detail on these factors. Proportionality refers to the relationship between the scope and nature of the objectives the defending state pursues and the instruments available to pursue them. The more the defending state demands of another state, the higher that state's costs of compliance and the greater the need for the defending state's strategy to increase the costs of noncompliance and the benefits of compliance. That is a challenge, as deterrence is by definition a strategy of limited means. George (1991) goes on to explain that deterrence sometimes goes beyond threats to the actual use of military force, but if force is actually used, it must be limited and fall short of full-scale use to succeed.
The main source of disproportionality is an objective that goes beyond policy change to regime change, as has been seen in Libya, Iraq, and North Korea. In those cases, defending states have sought not only policy changes, relating primarily to nuclear weapons programs, but also a change in the leadership of the state.
Reciprocity
Secondly, Jentleson et al. outline that reciprocity involves an explicit understanding of the linkage between the defending state's carrots and the attacking state's concessions. The balance lies in not offering too little, too late, or for too much in return, and not offering too much, too soon, or for too little in return.
Coercive credibility
Finally, coercive credibility requires that in addition to calculations about costs and benefits of co-operation, the defending state convincingly conveys to the attacking state that failure to co-operate has consequences. Threats, uses of force, and other coercive instruments such as economic sanctions must be sufficiently credible to raise the attacking state's perceived costs of noncompliance. A defending state having a superior military capability or economic strength in itself is not enough to ensure credibility. Indeed, all three elements of a balanced deterrence strategy are more likely to be achieved if other major international actors like the UN or NATO are supportive, and opposition within the defending state's domestic politics is limited.
The other important consideration outlined by Jentleson et al. is the set of domestic political and economic conditions in the attacking state that affect its vulnerability to deterrence policies, as well as the attacking state's ability to compensate for unfavourable power balances. The first factor is whether internal political support and regime security are better served by defiance, or whether there are domestic political gains to be made from improving relations with the defending state. The second factor is an economic calculation of the costs that military force, sanctions, and other coercive instruments can impose and the benefits that trade and other economic incentives may carry. That is partly a function of the strength and flexibility of the attacking state's domestic economy and its capacity to absorb or counter the costs being imposed. The third factor is the role of elites and other key domestic political figures within the attacking state. To the extent that such actors' interests are threatened by the defending state's demands, they act to prevent or block those demands.
Rational deterrence theory
One approach to theorizing about deterrence has entailed the use of rational choice and game-theoretic models of decision making (see game theory); a schematic illustration follows the list below. Rational deterrence theory entails:
Rationality: actors are rational
Unitary actor assumption: actors are understood as unitary
Dyads: interactions tend to be between dyads (or triads) of states
Strategic interactions: actors consider the choices of other actors
Cost-benefit calculations: outcomes reflect actors' cost-benefit calculations
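The following is a minimal sketch of the kind of expected-utility calculation such game-theoretic models formalize; the class name, the payoffs, and the belief about the defender's resolve are hypothetical choices for illustration only:

```python
from dataclasses import dataclass


@dataclass
class DeterrenceGame:
    """One-shot challenger decision under uncertainty about the defender's resolve.

    All payoffs are from the challenger's point of view and are hypothetical.
    """
    status_quo: float     # payoff from leaving things as they are
    win_unopposed: float  # payoff if the challenger acts and the defender backs down
    war: float            # payoff if the challenger acts and the defender resists
    p_resolved: float     # challenger's belief that the defender will resist

    def expected_value_of_challenging(self) -> float:
        return self.p_resolved * self.war + (1 - self.p_resolved) * self.win_unopposed

    def deterrence_holds(self) -> bool:
        # Deterrence succeeds when the status quo is at least as good as the gamble.
        return self.status_quo >= self.expected_value_of_challenging()


# Hypothetical payoffs: war is very costly (-50), an unopposed win is attractive (+30).
game = DeterrenceGame(status_quo=0, win_unopposed=30, war=-50, p_resolved=0.5)
print(game.expected_value_of_challenging())  # -10.0 -> challenging is worse than the status quo
print(game.deterrence_holds())               # True

# If the defender's resolve is doubted (p_resolved = 0.2), the same threat no longer deters.
print(DeterrenceGame(status_quo=0, win_unopposed=30, war=-50, p_resolved=0.2).deterrence_holds())  # False
```

The sketch also previews why credibility matters: holding payoffs fixed, deterrence holds or fails depending only on the challenger's belief that the defender will actually resist.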
Deterrence theorists have consistently argued that deterrence success is more likely if a defending state's deterrent threat is credible to an attacking state. Huth outlines that a threat is considered credible if the defending state possesses both the military capabilities to inflict substantial costs on an attacking state in an armed conflict, and the attacking state believes that the defending state is resolved to use its available military forces. Huth goes on to explain the four key factors for consideration under rational deterrence theory: the military balance, signaling and bargaining power, reputations for resolve, interests at stake.
The American economist Thomas Schelling brought his background in game theory to the subject of studying international deterrence. Schelling's (1966) classic work on deterrence presents the concept that military strategy can no longer be defined as the science of military victory. Instead, it is argued that military strategy was now equally, if not more, the art of coercion, intimidation and deterrence. Schelling says the capacity to harm another state is now used as a motivating factor for other states to avoid it and influence another state's behavior. To be coercive or deter another state, violence must be anticipated and avoidable by accommodation. It can therefore be summarized that the use of the power to hurt as bargaining power is the foundation of deterrence theory and is most successful when it is held in reserve.
In an article celebrating Schelling's Nobel Memorial Prize for Economics, Michael Kinsley, Washington Post op‑ed columnist and one of Schelling's former students, anecdotally summarizes Schelling's reorientation of game theory thus: "[Y]ou're standing at the edge of a cliff, chained by the ankle to someone else. You'll be released, and one of you will get a large prize, as soon as the other gives in. How do you persuade the other guy to give in, when the only method at your disposal—threatening to push him off the cliff—would doom you both? Answer: You start dancing, closer and closer to the edge. That way, you don't have to convince him that you would do something totally irrational: plunge him and yourself off the cliff. You just have to convince him that you are prepared to take a higher risk than he is of accidentally falling off the cliff. If you can do that, you win."
Military balance
Deterrence is often directed against state leaders who have specific territorial goals that they seek to attain either by seizing disputed territory in a limited military attack or by occupying disputed territory after the decisive defeat of the adversary's armed forces. In either case, the strategic orientation of potential attacking states generally is for the short term and is driven by concerns about military cost and effectiveness. For successful deterrence, defending states need the military capacity to respond quickly and strongly to a range of contingencies. Deterrence often fails if either a defending state or an attacking state underestimates or overestimates the other's ability to undertake a particular course of action.
Signaling and bargaining power
The central problem for a state that seeks to communicate a credible deterrent threat by diplomatic or military actions is that all defending states have an incentive to act as if they are determined to resist an attack in the hope that the attacking state will back away from military conflict with a seemingly resolved adversary. If all defending states have such incentives, potential attacking states may discount statements made by defending states along with any movement of military forces as merely bluffs. In that regard, rational deterrence theorists have argued that costly signals are required to communicate the credibility of a defending state's resolve. Those are actions and statements that clearly increase the risk of a military conflict and also increase the costs of backing down from a deterrent threat. States that bluff are unwilling to cross a certain threshold of threat and military action for fear of committing themselves to an armed conflict.
Reputations for resolve
Three different arguments have been developed in relation to the role of reputations in influencing deterrence outcomes. The first argument focuses on a defending state's past behavior in international disputes and crises, which creates strong beliefs in a potential attacking state about the defending state's expected behaviour in future conflicts. The credibility of a defending state's policies is arguably linked over time, and reputations for resolve have a powerful causal impact on an attacking state's decision whether to challenge either general or immediate deterrence. The second approach argues that reputations have a limited impact on deterrence outcomes because the credibility of deterrence is heavily determined by the specific configuration of military capabilities, interests at stake, and political constraints faced by a defending state in a given situation of attempted deterrence. The argument of that school of thought is that potential attacking states are not likely to draw strong inferences about a defending state's resolve from prior conflicts because potential attacking states do not believe that a defending state's past behaviour is a reliable predictor of future behavior. The third approach is a middle ground between the first two and argues that potential attacking states are likely to draw reputational inferences about resolve from the past behaviour of defending states only under certain conditions. The insight is the expectation that decisionmakers use only certain types of information when drawing inferences about reputations, and that an attacking state updates and revises its beliefs when a defending state's unanticipated behavior cannot be explained by case-specific variables.
The problem extends to the perceptions of third parties as well as those of the main adversaries, and it underlies the ways in which attempts at deterrence can fail and even backfire if the assumptions about others' perceptions are incorrect.
Interests at stake
Although costly signaling and bargaining power are better-established arguments in rational deterrence theory, the interests of defending states are not as well understood. Attacking states may look beyond the short-term bargaining tactics of a defending state and seek to determine what interests are at stake for the defending state that would justify the risks of a military conflict. The argument is that defending states with greater interests at stake in a dispute are more resolved to use force and more willing to endure military losses to secure those interests. Even less well established are arguments about which specific interests are more salient to state leaders, such as military versus economic interests.
Furthermore, Huth argues that both supporters and critics of rational deterrence theory agree that an unfavorable assessment of the domestic and international status quo by state leaders can undermine or severely test the success of deterrence. In a rational choice approach, if the expected utility of not using force is reduced by a declining status quo position, deterrence failure is more likely since the alternative option of using force becomes relatively more attractive.
Tripwires
International relations scholars Dan Reiter and Paul Poast have argued that so-called "tripwires" do not deter aggression. Tripwires entail that small forces are deployed abroad with the assumption that an attack on them will trigger a greater deployment of forces. Dan Altman has argued that tripwires do work to deter aggression, citing the Western deployment of forces to Berlin in 1948–1949 to deter Soviet aggression as a successful example.
A 2022 study by Brian Blankenship and Erik Lin-Greenberg found that high-resolve, low-capability signals (such as tripwires) were not viewed as more reassuring to allies than low-resolve, high-capability alternatives (such as forces stationed offshore). Their study cast doubt on the reassuring value of tripwires.
Nuclear deterrence theory
In 1966, Schelling was prescriptive in outlining the impact of the development of nuclear weapons on the analysis of military power and deterrence. In his analysis, written before the widespread deployment of assured second-strike capability (immediate reprisal) in the form of ballistic missile submarines (SSBNs), Schelling argues that nuclear weapons give nations the potential to destroy their enemies, and indeed the rest of humanity, without drawing immediate reprisal, because of the lack of a conceivable defense system and the speed with which nuclear weapons can be deployed. A nation's credible threat of such severe damage empowers its deterrence policies and fuels political coercion and military deadlock, which can produce proxy warfare.
According to Kenneth Waltz, there are three requirements for successful nuclear deterrence:
Part of a state's nuclear arsenal must appear to be able to survive an attack by the adversary and be used for a retaliatory second strike
The state must not respond to false alarms of a strike by the adversary
The state must maintain command and control
The stability–instability paradox is a key concept in rational deterrence theory. It states that when two countries each have nuclear weapons, the probability of a direct war between them greatly decreases, but the probability of minor or indirect conflicts between them increases. This occurs because rational actors want to avoid nuclear wars, and thus they neither start major conflicts nor allow minor conflicts to escalate into major conflicts—thus making it safe to engage in minor conflicts. For instance, during the Cold War the United States and the Soviet Union never engaged each other in warfare, but fought proxy wars in Korea, Vietnam, Angola, the Middle East, Nicaragua and Afghanistan and spent substantial amounts of money and manpower on gaining relative influence over the third world.
Bernard Brodie wrote in 1959 that a credible nuclear deterrent must be always ready but never used.
Stages of US policy of deterrence
The US policy of deterrence during the Cold War underwent significant variations.
Containment
The early stages of the Cold War were generally characterized by the containment of communism, an aggressive stance on behalf of the US especially toward developing nations under its sphere of influence. The period was characterized by numerous proxy wars throughout most of the globe, particularly in Africa, Asia, Central America, and South America. One notable conflict was the Korean War. George F. Kennan, who is taken to be the founder of the policy through his Long Telegram, asserted that he never advocated military intervention, merely economic support, and that his ideas were misinterpreted as they were espoused by the general public.
Détente
With the US drawdown from Vietnam, the normalization of US relations with China, and the Sino-Soviet split, the policy of containment was abandoned and a new policy of détente was established, under which peaceful coexistence was sought between the United States and the Soviet Union. Although all of those factors contributed to the shift, the most important factor was probably the rough parity achieved in the stockpiling of nuclear weapons with the clear capability of mutual assured destruction (MAD). Therefore, the period of détente was characterized by a general reduction in tension between the Soviet Union and the United States and a thawing of the Cold War, lasting from the late 1960s until the start of the 1980s. The doctrine of mutual nuclear deterrence then characterized relations between the United States and the Soviet Union, and later relations with Russia, until the onset of the New Cold War in the early 2010s. Since then, relations have been less clear.
Reagan era
A third shift occurred with US President Ronald Reagan's arms build-up during the 1980s. Reagan attempted to justify the policy by citing concerns about growing Soviet influence in Latin America and the regime in Iran established after the Iranian Revolution of 1979. Similar to the old policy of containment, the US funded several proxy wars, including support for Saddam Hussein of Iraq during the Iran–Iraq War, support for the mujahideen in Afghanistan, who were fighting against the Soviet occupation, and several anticommunist movements in Latin America, such as the effort to overthrow the Sandinista government in Nicaragua. The funding of the Contras in Nicaragua led to the Iran–Contra affair, while overt support led to a ruling from the International Court of Justice against the United States in Nicaragua v. United States.
With the breakup of the Soviet Union and the spread of nuclear technology to nations beyond the United States and Russia, the concept of deterrence took on a broader multinational dimension. The US policy on deterrence after the Cold War was outlined in 1995 in the document "Essentials of Post–Cold War Deterrence." It explains that while relations with Russia continue to follow the traditional characteristics of MAD, the US policy of deterrence toward nations with minor nuclear capabilities should ensure, through threats of immense retaliation (or even pre-emptive action), that they do not threaten the United States, its interests, or its allies. The document explains that such threats must also be used to ensure that nations without nuclear technology refrain from developing nuclear weapons and that a universal ban precludes any nation from maintaining chemical or biological weapons. The current tensions with Iran and North Korea over their nuclear programs are caused partly by the continuation of the policy of deterrence.
Cyber deterrence
Since the early 2000s, there has been an increased focus on cyber deterrence. Cyber deterrence has two meanings:
The use of cyber actions to deter other states
The deterrence of an adversary's cyber operations
Scholars have debated how cyber capabilities alter traditional understandings of deterrence, given that it may be harder to attribute responsibility for cyber attacks, the barriers to entry may be lower, the risks and costs may be lower for actors who conduct cyber attacks, it may be harder to signal and interpret intentions, offense may hold an advantage over defense, and weak actors and non-state actors can develop considerable cyber capabilities. Scholars have also debated the feasibility of launching highly damaging cyber attacks and engaging in destructive cyber warfare, with most scholars expressing skepticism that cyber capabilities have enhanced the ability of states to launch highly destructive attacks. The most prominent cyber attack to date is the Stuxnet attack on Iran's nuclear program. By 2019, the only publicly acknowledged case of a cyber attack causing a power outage was the 2015 Ukraine power grid hack.
There are various ways to engage in cyber deterrence:
Denial: preventing adversaries from achieving military objectives by defending against them
Punishment: the imposition of costs on the adversary
Norms: the establishment and maintenance of norms that establish appropriate standards of behavior
Escalation: raising the probability that costs will be imposed on the adversary
Entanglement and interdependence: interdependence between actors can have a deterrent effect
There is a risk of unintended escalation in cyberspace due to difficulties in discerning the intent of attackers, and complexities in state-hacker relationships. According to political scientists Joseph Brown and Tanisha Fazal, states frequently neither confirm nor deny responsibility for cyber operations so that they can avoid the escalatory risks (that come with public credit) while also signaling that they have cyber capabilities and resolve (which can be achieved if intelligence agencies and governments believe they were responsible).
According to Lennart Maschmeyer, cyber weapons have limited coercive effectiveness due to a trilemma "whereby speed, intensity, and control are negatively correlated. These constraints pose a trilemma for actors because a gain in one variable tends to produce losses across the other two variables."
Intrawar deterrence
Intrawar deterrence is deterrence within a war context. It means that war has broken out but actors still seek to deter certain forms of behavior. In the words of Caitlin Talmadge, "intra-war deterrence failures... can be thought of as causing wars to get worse in some way." Examples of intrawar deterrence include deterring adversaries from resorting to nuclear, chemical, and biological weapons attacks or from attacking civilian populations indiscriminately. Broadly, it involves any prevention of escalation.
Criticism
Deterrence theory has been criticized by numerous scholars for various reasons. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions. Scholars have also argued that leaders do not behave in ways that are consistent with the predictions of nuclear deterrence theory.
It is argued that suicidal or psychotic opponents may not be deterred by either form of deterrence. Also, diplomatic misunderstandings and/or opposing political ideologies may lead to escalating mutual perceptions of threat and a subsequent arms race that elevates the risk of actual war, a scenario illustrated in the movies WarGames (1983) and Dr. Strangelove (1964). An arms race is inefficient in its optimal output, as all countries involved expend resources on armaments that would not have been created if the others had not expended resources, a form of positive feedback. Besides, escalation of perceived threat can make it easier for certain measures to be inflicted on a population by its government, such as restrictions on civil liberties, the creation of a military–industrial complex, and military expenditures resulting in higher taxes and increasing budget deficits.
In recent years, many mainstream politicians, academic analysts, and retired military leaders have also criticized deterrence and advocated nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have all called upon governments to embrace the vision of a world free of nuclear weapons, and in three Wall Street Journal op-eds proposed an ambitious program of urgent steps to that end. The four have created the Nuclear Security Project to advance that agenda. Organisations such as Global Zero, an international non-partisan group of 300 world leaders dedicated to achieving nuclear disarmament, have also been established. In 2010, the four were featured in a documentary film entitled Nuclear Tipping Point. The film is a visual and historical depiction of the ideas laid forth in the Wall Street Journal op-eds and reinforces their commitment to a world without nuclear weapons and the steps that can be taken to reach that goal.
Kissinger puts the new danger, which cannot be addressed by deterrence, this way: "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. In a world of suicide bombers, that calculation doesn't operate in any comparable way." Shultz said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable."
As opposed to the extreme form of deterrence represented by mutually assured destruction, the concept of minimum deterrence, in which a state possesses no more nuclear weapons than are necessary to deter an adversary from attacking, is presently the most common form of deterrence practiced by nuclear weapon states such as China, India, Pakistan, Britain, and France. Pursuing minimal deterrence during arms negotiations between the United States and Russia allows each state to make nuclear stockpile reductions without becoming vulnerable, but it has been noted that once minimal deterrence is reached, further reductions may be undesirable, as they increase a state's vulnerability and provide an incentive for an adversary to expand its nuclear arsenal secretly.
"Senior European statesmen and women" called for further action in addressing problems of nuclear weapons proliferation in 2010: "Nuclear deterrence is a far less persuasive strategic response to a world of potential regional nuclear arms races and nuclear terrorism than it was to the cold war."
Paul Virilio criticized nuclear deterrence as anachronistic in the age of information warfare, since disinformation and kompromat are the current threats to suggestible populations. He calls the wound inflicted on unsuspecting populations an "integral accident":
The first deterrence, nuclear deterrence, is presently being superseded by the second deterrence: a type of deterrence based on what I call 'the information bomb' associated with the new weaponry of information and communications technologies. Thus, in the very near future, and I stress this important point, it will no longer be war that is the continuation of politics by other means, it will be what I have dubbed 'the integral accident' that is the continuation of politics by other means.
A former deputy defense secretary and strategic arms treaty negotiator, Paul Nitze, stated in a Washington Post op-ed in 1994 that nuclear weapons were obsolete in the "new world disorder" after the dissolution of the Soviet Union, and he advocated reliance on precision guided munitions to secure a permanent military advantage over future adversaries.
In 2004, Frank C. Zagare made the case that deterrence theory is logically inconsistent and not empirically accurate, and that it is deficient as a theory. In place of classical deterrence, rational choice scholars have argued for perfect deterrence, which assumes that states may vary in their internal characteristics and especially in the credibility of their threats of retaliation.
In a January 2007 article in The Wall Street Journal, veteran Cold War policy makers Henry Kissinger, Bill Perry, George Shultz, and Sam Nunn reversed their previous position and asserted that, far from making the world safer, nuclear weapons had become a source of extreme risk. Their rationale and conclusion were based not on the old world with only a few nuclear players but on the instability in many states possessing the technologies while lacking the wherewithal for the proper maintenance and upgrading of existing weapons.
According to The Economist, "Senior European statesmen and women" called for further action in 2010 in addressing problems of nuclear weapons proliferation: "Nuclear deterrence is a far less persuasive strategic response to a world of potential regional nuclear arms races and nuclear terrorism than it was to the cold war."
Research has focused predominantly on the theory of rational deterrence to analyze the conditions under which conventional deterrence is likely to succeed or fail. Alternative theories, however, have challenged the rational deterrence theory and have focused on organizational theory and cognitive psychology.
See also
Balance of terror
Chainstore paradox
Confidence-building measures
Decapitation strike
International relations
Launch on warning
Long Peace
Metal Gear Solid: Peace Walker
N-deterrence
Nuclear blackmail
Nuclear ethics
Nuclear peace
Nuclear strategy
Nuclear terrorism
Nuclear warfare
Peace through strength
Prisoner's dilemma
Reagan Doctrine
Security dilemma
Tripwire force
Wargaming
Notes
References
Further reading
Shultz, George P. and Goodby, James E. The War that Must Never be Fought, Hoover Press, 2015.
Freedman, Lawrence. 2004. Deterrence. New York: Polity Press.
Jervis, Robert, Richard N. Lebow and Janice G. Stein. 1985. Psychology and Deterrence. Baltimore: Johns Hopkins University Press. 270 pp.
Morgan, Patrick. 2003. Deterrence Now. New York: Cambridge University Press.
T.V. Paul, Patrick M. Morgan, James J. Wirtz, Complex Deterrence: Strategy In the Global Age (University of Chicago Press, 2009) .
Garcia Covarrubias, Jaime. "The Significance of Conventional Deterrence in Latin America", March–April 2004.
Waltz, Kenneth N. "Nuclear Myths and Political Realities". The American Political Science Review. Vol. 84, No. 3 (Sep, 1990), pp. 731–746.
External links
Nuclear Deterrence Theory and Nuclear Deterrence Myth, streaming video of a lecture by Professor John Vasquez, Program in Arms Control, Disarmament, and International Security (ACDIS), University of Illinois, September 17, 2009.
Deterrence Today – Roles, Challenges, and Responses, analysis by Lewis A. Dunn, IFRI Proliferation Papers n° 19, 2007
Revisiting Nuclear Deterrence Theory by Donald C. Whitmore – March 1, 1998
Nuclear Deterrence, Missile Defenses, and Global Instability by David Krieger, April 2001
Bibliography
Maintaining Nuclear Deterrence in the 21st Century by the Senate Republican Policy Committee
Nuclear Files.org Description and analysis of the nuclear deterrence theory
Nuclear Files.org Speech by US General Lee Butler in 1998 on the Risks of Nuclear Deterrence
Nuclear Files.org Speech by Sir Joseph Rotblat, Nobel Peace Laureate, on the Ethical Dimensions of Deterrence
The Universal Formula for Successful Deterrence by Charles Sutherland, 2007. A predictive tool for deterrence strategies.
Will the Eagle strangle the Dragon?, Analysis of how the Chinese nuclear deterrence is altered by the U.S. BMD system, Trends East Asia, No. 20, February 2008.
When is Deterrence Necessary? Gauging Adversary Intent by Gary Schaub, Jr., Strategic Studies Quarterly 3, 4 (Winter 2009)
The significance of conventional deterrence in Latin America
U.S. Nuclear Deterrence Policy United States Department of Defense
Cold War policies
Cold War terminology
Geopolitical terminology
International relations theory
International security
Military strategy
Nuclear strategy
Nuclear warfare |
3643928 | https://en.wikipedia.org/wiki/Brian%20Bannister | Brian Bannister | Brian Patrick Bannister (born February 28, 1981) is an American former professional baseball starting pitcher who played for the New York Mets and Kansas City Royals of Major League Baseball (MLB) from 2006 through 2010, and is currently the director of pitching with the San Francisco Giants. He played college baseball as a walk-on for the University of Southern California. Bannister was selected by the Mets in the seventh round of the 2003 MLB draft. He previously served as assistant pitching coach and vice president of pitching development for the Boston Red Sox.
Amateur career
Bannister was born in Scottsdale, Arizona. He had a remarkable high school career at Chaparral High School, former home of Chicago White Sox star Paul Konerko, as he was named All-Region and All-City in 1997, 1998 and 1999. Chaparral was the runner-up to the state title in 1997 and 1998, but in Bannister's senior year, he helped take home the state championship by striking out seven of the nine batters he faced in the championship game.
He began his college career as a walk-on at the University of Southern California. Entering as a second baseman, he became a full-time pitcher before the start of his freshman season. He posted an ERA of 4.35 in ten games out of the bullpen in his freshman year. Acting as the team closer during his 2001 sophomore campaign, he compiled a 2.80 ERA in thirty-five relief appearances. Bannister helped the Trojans to the College World Series in both 2000 and 2001 while pitching alongside former Major Leaguers Mark Prior and Anthony Reyes. After the 2001 season, he played collegiate summer baseball with the Brewster Whitecaps of the Cape Cod Baseball League. He redshirted in 2002, due to arthroscopic elbow surgery to remove impinged scar tissue in his elbow. He was drafted by the Boston Red Sox in 2002, but did not sign. He returned to the Trojans in 2003 to play his junior year, which was also his first year as a starter. In eighteen games (fourteen starts), Bannister compiled a 6–5 record with an ERA of 4.53.
Professional career
Bannister was drafted by the Mets in the seventh round of the 2003 amateur draft and, after signing, was assigned to the Class-A Brooklyn Cyclones. There he put together a strong season, posting a 4–1 record with an ERA of 2.15 in twelve games (nine starts) and was named a New York–Penn League Postseason All-Star. In 2004, Bannister was assigned to play for High-A St. Lucie in the Florida State League, where he put together a 5–7 record with a 4.24 ERA in twenty starts and was a Florida State League All-Star. His experimentation with throwing a two-seam fastball and circle changeup led to this decline in numbers, but prepared him for the competition at higher levels of professional baseball. Bannister was then promoted to AA Binghamton following the trade of Scott Kazmir to the Tampa Bay Devil Rays, where he had a 3–3 record and an ERA of 4.08 in a mere eight starts. After the 2004 season, Bannister played for the Peoria Saguaros of the Arizona Fall League. He posted strong numbers, going 2–0 with a 3.77 ERA against the top prospects in the minor leagues. More importantly, he developed his cut fastball while in the AFL, which would develop into one of his strongest pitches. The next year, Bannister began the 2005 season in Double-A Binghamton, where he posted numbers that reflected the quality of his newly developed pitches: a 9–4 record with a 2.56 ERA in eighteen starts. This earned him an All-Star selection for the third consecutive season, and the honor of starting pitcher for the Double-A All-Star Game. This display caused Bannister to earn a promotion to AAA Norfolk, where he showed further promise against better competition. He finished his AAA campaign with a 4–1 record and an ERA of 3.18 in eight starts.
2006
At 25 years old, Bannister made his Major League debut against the Washington Nationals. His first major league win came in his second start, also against Washington, on April 11, 2006. A former second baseman, Bannister also excelled at the plate, acquiring four hits in his first ten at-bats, including three doubles.
After making five starts, Bannister was put on the 15-day disabled list with a strained right hamstring which he injured while running the bases in the fifth inning of an April 26 game against San Francisco. Bannister was later moved to the 60-day DL. Bannister made 5 starts for the Mets and had a record of 2–0 with a 2.89 ERA.
Bannister spent a month on a Minor League rehab assignment, pitching for the St. Lucie Mets and the Norfolk Tides. When Orlando Hernández was unable to pitch in late August, Bannister made a spot start against the Phillies, giving up 4 runs in 6 innings in a 4–3 loss. The game was Bannister's first major league defeat. Immediately after the game, Bannister was optioned to AAA Norfolk to allow Óliver Pérez to make a spot start the following day. Bannister returned to the Mets for the month of September and made two relief appearances. On September 6, 2006, the Brooklyn Cyclones honored Bannister with his own bobblehead and retired his number, 19. Bannister was the first pitcher from the Cyclones to make his Major League debut with the Mets.
Because his hamstring injury reduced the number of innings pitched in 2006, Bannister joined the Tomateros de Culiacán in the Mexican Pacific League. He won in his debut, pitching five innings against the Algodoneros de Guasave. After completing the first half of the season with the Tomateros, Bannister returned home with a 3–2 record and a 3.68 ERA.
On December 5, 2006, during the MLB Winter Meetings, Bannister was traded from the New York Mets to the Kansas City Royals for relief pitcher Ambiorix Burgos.
2007
In spring 2007, Bannister's high school jersey number, 15, was retired alongside former Chaparral High School players Darryl Deak, Brian Deak, coach Mark Miller, and former Chicago White Sox star Paul Konerko.
On April 24, 2007, Bannister made his debut with the Royals against the Chicago White Sox. He gave up 4 runs, 3 of them earned, in 4 1/3 innings, and was not involved in the decision.
In June 2007, Bannister was one of two major league pitchers to win 5 games, going 5–1 with a 2.75 ERA in six starts, including a streak of 18 innings without an earned run, and was named AL Rookie of the Month. He also received the same award in August after winning 4 games in the month.
On August 16, 2007, Bannister threw his first career complete game, a four-hitter against the A's. He threw 111 pitches, 73 for strikes.
Bannister finished 3rd in the 2007 American League Rookie of the Year voting, after finishing 12–9 with a 3.87 ERA. He received 1 first place vote, 8 second place votes, and 7 third place votes.
Bannister was selected to the 2007 Topps Major League Rookie All-Star Team. The selection was the result of the 49th annual Topps balloting of Major League managers.
2008
In 2008, he was named the Royals' number-2 starter behind Gil Meche. He led the majors in grand slams allowed, with four. He regressed from his 2007 form and saw his ERA spike from 3.87 to 5.76. He finished 9–16, among the highest loss totals for starting pitchers that year. On August 17, 2008, against the Yankees, Bannister gave up 10 earned runs on 10 hits (3 home runs) and 3 walks while pitching only one complete inning. His daughter Brynn was born on October 11 of that year.
2009
After a poor 2008, Bannister started the 2009 season with AAA Omaha before promptly being called up a week later to fill the number 5 spot in the rotation. After adding a new changeup to his repertoire, his ground ball rate increased, and he had a 7–7 record and a 3.59 ERA into early August, ranking in the Top 10 in the American League in ERA and becoming the subject of numerous trade deadline rumors. He then suffered a right rotator cuff tear in a 117-pitch game on August 3 against Tampa Bay. After attempting to pitch with the injury and losing five consecutive starts, he was placed on the disabled list for the rest of the season. He finished the year with a record of 7–12 and a 4.73 ERA.
2010
Bannister spent the winter rehabilitating his shoulder and returned for what would be his last MLB season. He struggled with giving up home runs and with pitching deep into games. On June 23, 2010, Bannister and the Royals handed Washington Nationals phenom Stephen Strasburg his first career loss in a 1–0 victory in Washington. His season finished with a loss at Cincinnati, a minor league rehab stint, and a loss to the Minnesota Twins.
In 2010, he was chosen as "honorable mention" in a list of the smartest athletes in sports by Sporting News.
2011
In January 2011, Bannister signed a two-year contract to play for Japan's Yomiuri Giants. In March, Bannister retired following the earthquake and tsunami in Northern Japan, stating he had no further plans to play in either Japan or the United States. He currently runs a fund that supports non-profit organizations for families in crisis in the San Francisco Bay Area.
Post-playing career
During his pitching career, Bannister became known in baseball for his interest in scouting and player analysis and evaluation. He became interested in statistical analysis and Sabermetrics such as FIP and UZR as means of determining a player's true value. On January 13, 2015, Bannister joined the Boston Red Sox as a member of its professional scouting department. On September 9, 2015, he was promoted by Red Sox president of baseball operations Dave Dombrowski to a new position, director of pitching analysis and development. On July 6, 2016, Bannister was promoted to assistant pitching coach. On November 3, 2016, he was promoted to vice president of pitching development, in addition to his role as assistant pitching coach. After the 2019 season, he was taken off the coaching staff while remaining as vice president of pitching development, until leaving the Red Sox to take a position with the San Francisco Giants in December 2019. His title with the Giants is director of pitching, and he is listed as a member of the team's uniformed coaching staff.
Photography career
Bannister is an avid photographer and photography supporter. He is the founder of a full-service photography studio complex and equipment rental house in Phoenix, Arizona. He graduated cum laude from the University of Southern California with a Bachelor of Arts degree from the School of Fine Arts. His work has been featured in The New York Times, New York Daily News, and American Photo.
Personal
Bannister is married and has one daughter, Brynn, who was born in the 2008 offseason, and a son, Atley, who was born in December 2011. He is a devout Christian and the oldest son of former Major League All-Star pitcher Floyd Bannister, who pitched from 1977 to 1992 with Houston, Seattle, Chicago (AL), Kansas City, California, and Texas. His uncle, Greg Cochran, also played in the Yankees' and Athletics' minor league systems. His brother Brett spent time as a pitcher in the Mariners' system, and his brother Cory pitched at Stanford. Brian and Brett are both members of Lambda Chi Alpha fraternity.
See also
List of second-generation Major League Baseball players
References
External links
Brian Bannister verified Twitter account
1981 births
Living people
Baseball players from Scottsdale, Arizona
Baseball players from Phoenix, Arizona
Binghamton Mets players
Boston Red Sox coaches
Boston Red Sox scouts
Brewster Whitecaps players
Brooklyn Cyclones players
Kansas City Royals players
Major League Baseball pitchers
Major League Baseball pitching coaches
New York Mets players
Norfolk Tides players
Omaha Royals players
St. Lucie Mets players
USC Trojans baseball players |
65764627 | https://en.wikipedia.org/wiki/Cybersecurity%20Law%20of%20the%20People%27s%20Republic%20of%20China | Cybersecurity Law of the People's Republic of China | The Cybersecurity Law of the People's Republic of China (Chinese: 中华人民共和国网络安全法), commonly referred to as the Chinese Cybersecurity Law, was enacted by the National People's Congress with the aim of increasing data protection, data localization, and cybersecurity, ostensibly in the interest of national security. The law is part of a wider series of laws passed by the Chinese government in an effort to strengthen national security legislation. Examples since 2014 include the Law on National Intelligence, the National Security Law of the People's Republic of China (not to be confused with the Hong Kong National Security Law), and laws on counter-terrorism and foreign NGO management, all passed within successive short timeframes of each other.
History
This law was enacted by the Standing Committee of the National People's Congress on November 7, 2016, and was implemented on June 1, 2017. It requires network operators to store select data within China and allows Chinese authorities to conduct spot-checks on a company's network operations.
The Cybersecurity Law is recognized as a basic law. This puts it at the top of the pyramid-structured legislation on cybersecurity. The law is an evolution of previously existing cybersecurity rules and regulations from various levels and fields, assimilating them to create a structured law at the macro level. The law also offers principal norms on certain issues that are not immediately urgent but are of long-term importance. These norms will serve as a legal reference when new issues arise.
Provisions
The law:
Created the principle of cyberspace sovereignty
Defined the security obligations of internet products and services providers
Detailed the security obligations of internet service providers.
Further refined rules surrounding personal information protection
Established a security system for key information infrastructure
Instituted rules for the transnational transmission of data from critical information infrastructures.
The cybersecurity law is applicable to network operators and businesses in "critical sectors." China roughly defines critical sectors as those involving telecommunications, information services, energy, transport, water, financial services, public services, and electronic government services. Some of the most controversial sections of the law include Articles 28, 35, and 37.
Article 28 compels vaguely defined "network operators" (interpreted to include social media platforms, application creators, and other technology companies) to cooperate with public security organs such as the Ministry of Public Security and to hand over information when requested.
Article 35 is targeted at purchases of foreign software or hardware by government agencies or other "critical information infrastructure operators." It requires any hardware or software purchased to undergo review by agencies such as China's State Cryptography Administration (SCA), potentially involving the provision of source code and other sensitive proprietary information to government agencies, paving the way for state theft of intellectual property or its transmission to domestic competitors. Above all, the article creates further regulatory burdens for foreign technology companies operating in China, indirectly creating a more favourable playing field for domestic competitors, which would naturally be more prepared to comply with the regulations.
Article 37 creates a requirement of data localisation, meaning that foreign technology companies such as Microsoft, Apple, and PayPal operating in the Chinese market are obligated to store Chinese user data on servers in mainland China. This provides an easier access route for Chinese intelligence and state security agencies to intercept data and communications, while expanding the power of the ruling Chinese Communist Party to target dissent and surveil citizens.
The law is applicable to all businesses in China that manage their own servers or other data networks. Network operators are expected, among other things, to clarify cybersecurity responsibilities within their organization, take technical measures to safeguard network operations, prevent data leaks and theft, and report any cybersecurity incidents to both users of the network and the relevant implementing department for that sector.
The law is composed of supporting subdivisions of regulations that specify its purpose, for instance the Critical Information Infrastructure (CII) Security Protection Regulations and the Measures for Security Assessment of Cross-border Transfer of Personal Information and Important Data. However, the law is not yet set in stone, since Chinese government authorities are occupied with defining more contingent regulations to better correspond with the cybersecurity law. By incorporating preexisting rules on VPNs and data security into the cybersecurity law, the Chinese government reinforces its control and emphasizes the need for foreign companies to comply with domestic regulations.
The cybersecurity law also provides regulations and definitions on legal liability. For different types of illegal conduct, the law sets a variety of punishments, such as fines, suspension for rectification, and revocation of permits and business licenses, among others. The law accordingly grants cybersecurity and administration authorities rights and guidelines to carry out law enforcement against illegal acts.
Although censorship affects mainland China, Hong Kong and Macau are exempt under the principle of “one country two systems” and the maintenance of separate and independent legal systems.
Related Regulations
In July 2021, the Cyberspace Administration of China issued “Regulations on the Management of Security Vulnerabilities in Network Products” requiring that all vulnerabilities be reported to the Ministry of Industry and Information Technology (MIIT) and prohibits the public disclosure of vulnerabilities, including to overseas organizations.
Reactions
Along with the Great Firewall, restrictions stipulated in the law have raised concerns, especially from foreign technology companies operating in China. Regarding the requirements for spot-checks and certifications, international law firms have warned that companies could be asked to provide source code, encryption, or other crucial information for review by the authorities, increasing the risk of intellectual property theft, information being lost, passed on to local competitors, or being used by the authorities themselves. The Federal Bureau of Investigation warned that the law could force companies transmitting data through servers in China to submit to data surveillance and espionage.
The law sparked concerns both domestically and internationally due to its phrasing and specific requirements. Foreign companies and businesses in China expressed concerns that the law might impede future investments in China, since it requires them to "store their data on Chinese-law regulated local servers, and cooperate with Chinese national security agencies," potentially increasing the risk of intellectual property theft and the loss of trade secrets in the process.
Since its inception, many foreign technology companies have complied with the law. Apple, for example, announced in 2017 that it would invest $1 billion in partnership with local cloud computing company Guizhou Cloud Big Data (GCBD) to construct a new data center in China's Guizhou province for the purposes of compliance. Simultaneously, the company announced that it would transfer the operation and storage of iCloud data to mainland China. Microsoft also announced an expansion of its Azure services in partnership with cloud computing company 21Vianet through investment in more servers. Meanwhile, online services such as Skype and WhatsApp, which refused to store their data locally, were either delisted from domestic app stores or restricted from further expansion.
Article 9 of the cybersecurity law states that "network operators … must obey social norms and commercial ethics, be honest and credible, perform obligations to protect network security, accept supervision from the government and public, and bear social responsibility." Such vague provisions are widely suspected to increase the government's scope to interpret and assert the need to intervene in business operations and to restrict the free flow of information and speech. Such interventions could include investigations, extending to government or trade associations requesting spot-checks of foreign firms. Among other things, the law further signals the determination of the Chinese government to strengthen its control over data and technology companies.
The law forces foreign technology and other companies operating within China to either invest in new server infrastructure in order to comply with the law or partner with service providers such as Huawei, Tencent, or Alibaba, which already have server infrastructure on the ground, saving capital expenditure costs for companies. The law is widely seen to be in line with the 12th Five-Year Plan (2011–2015), which aims to create domestic champions in industries such as cloud computing and big data processing. The law is seen as a boon to domestic companies and has been criticized as creating an unfair playing field that disadvantages international technology companies such as Microsoft and Google.
Supporters of the law have stated that the intention of the law is not to prohibit foreign businesses from operating in China or to boost domestic Chinese competitiveness. A study by Matthias Bauer and Hosuk Lee-Makiyama in 2015 states that data localization causes minor damage to economic growth due to inefficiencies that arise from data transfer processes and the duplication of data between several jurisdictions. The requirement for data localization is also seen as a move by Beijing to bring data under Chinese jurisdiction and make it easier to prosecute entities seen as violating China's internet laws.
The president of AmCham South China, Harley Seyedin, claimed that foreign firms are facing “mass concerns” because the law has greatly increased operating costs and has had a big impact on how business is done in China. More specifically, he stated that the cyber security law continues to create “uncertainties within the investment community, and it’s resulting in, at the minimum, postponement of some R&D investment.”
The law was widely criticized for limiting freedom of speech. For example, the law explicitly requires most online services operating in China to collect and verify the identity of their users, and, when required to, surrender such information to law enforcement without warrant. Activists have argued this policy dissuades people from freely expressing their thoughts online, further stifling dissent by making it easier to target and surveil dissidents.
See also
Data Security Law of the People's Republic of China
Personal Information Protection Law of the People's Republic of China
List of statutes of China
Law of the People's Republic of China
Chinese cyberwarfare
References
Chinese law
Cyberwarfare in China
Mass surveillance
2017 in China
2017 in law |
5679227 | https://en.wikipedia.org/wiki/EPAS | EPAS | EPAS (Electronic Protocols Application Software) is a non-commercial cooperation initiative launched in Europe which aims at developing a series of data protocols to be applied in a point of interaction (POI) environment.
The project intends to address the three following protocols:
a terminal management protocol;
a retailer application protocol;
an acquirer protocol.
The main objectives common to the three protocols are:
protocol interoperability: each protocol is designed in such a way as to be independent of the external device and the POI;
independence of the system architecture and the integration level of the POI within the retailer application protocol;
independence of the communication support and low-level protocols: each protocol is independent of the network connection and will address both wired and wireless connections.
Context
A current barrier to the development of the POI market is the existing fragmentation of the market for this type of equipment, especially in Europe, where each country has adopted its own requirements and rules in terms of security and functions to be implemented in POI devices.
Today's situation is the following:
card accepting devices from one country cannot be replaced by similar devices of another country due to different, incompatible protocols for download, key management, and communication with cash registers, requiring specific software modules for each country of operation;
card accepting devices from one country cannot process transactions issued by acquirers from another country due to the incompatibility of the protocols between the two countries;
different proprietary implementations of existing ISO protocols in Europe hamper the development of central acquiring activities in Europe.
The goal of the EPAS project is therefore the issuance of technical specifications and the development of open software for three major protocols to be used in a Point of Interaction environment. The protocols enable a POI to communicate with external devices and hosts. An additional aim of the project is to validate - through demonstrators - the technical feasibility of the three protocols developed in the framework of the project.
The project is to be considered a cornerstone of another initiative ("ERIDANE") launched at the European level with several partners belonging to the EPAS Consortium. The aim of the ERIDANE project is to achieve a common set of standards for hardware and software components to be used in point-of-sale retail environments. As such, EPAS adequately complements the work carried out by the ERIDANE project, which essentially focuses on the inner structure of a POI terminal used at retailers' point-of-sale locations.
With the development of specifications, the project intends to issue a new generation of open standards to be internationally deployed by equipment manufacturers, with a strong support from the banking and card payment industries as well as retailers, solutions providers and users.
The consortium is made of large industrial organisations, small and medium enterprises, card payment organisations, solution providers, retailers and users, as well as an academic institution, all having an in-depth expertise in the activities to be carried out in the framework of the project.
The partnership is composed of organisations belonging to Austria, France, Germany, Italy, Belgium, the Netherlands, Luxembourg, Spain, Portugal, the United Kingdom and Nordic countries.
Objectives of EPAS
The outcome of the EPAS initiative will make it possible to achieve interoperability by:
giving manufacturers a technological advantage over their non-European competitors;
ensuring the interoperability of protocols at a European level;
improving the security level of the protocols.
One of the objectives of the European Commission in contributing to the building of the European Union is the creation of a Single European Market ensuring the free circulation of goods and persons, as is the case today for most national (domestic) markets in Europe. In order to anticipate undue legislation, partners in the consortium have come together to develop and disseminate a set of data protocols which would complement the business standards needed to achieve the necessary Single European Payments Area standards.
The development of standards to create a large internal market of financial services has, however, been largely endorsed, not only by the banking industry, but also by solution providers, manufacturers, retailers and users.
The EPAS project foresees the development and provision of the missing links mentioned above in the creation of a unified market of electronic payments services by 2010.
The EPAS project intends to bring a major benefit to the POS market by eliminating existing barriers and by allowing applications to be developed and used for both national and international markets reducing to a large extent the investments to be carried out by all actors involved.
The EPAS project aims at addressing such technical bottlenecks by delivering state-of-the-art data protocols in order to ensure a smooth process of POS transactions in a forthcoming Europe-wide domestic market.
Project Development
The proposed initiative will be structured along the following three main phases:
Phase I: development of technical specifications and issuance of standards (2006 – mid-2007)
Phase II: development of software and provision of test tools (2007 – 2008)
Phase III: construction of demonstrators (2008)
The EPAS project will be conducted in line with the strategic objectives of the EPC (European Payments Council).
Project Evolution
Since 2014, the EPAS project has been taken over by nexo-standards.
The three protocols, developed as ISO 20022 protocols, are freely available on the ISO 20022 web site; message usage guides and additional information are freely available on the nexo-standards official web site.
Participants
The EPAS Consortium is composed of 24 organisations, each of them actively involved in its respective domain of expertise (card payment schemes, manufacturers, service companies, software developers, retailers).
The participating organisations are:
Ingenico (FR)
VeriFone (US)
The Logic Group (UK)
Amadis (CA)
ELITT (FR)
MoneyLine (FR)
Lyra Network (FR)
Atos Worldline (DE)
Wincor Nixdorf (ES)
GIE – Groupement des Cartes Bancaires "CB" (FR) (Co-ordinator)
Desjardins (CA)
Atos Worldline (BE)
Security Research and Consulting (SRC) GmbH (DE)
Equens SE (NL)
Sermepa (ES)
Cetrel (LU)
Total (FR)
Quercia (IT)
University of Applied Sciences, Cologne (DE)
Integri (BE)
PAN Nordic Card Association (PNC) (SE)
GALITT (FR)
BP (GB)
RSC Commercial Services (DE)
Europay Austria Zahlungsverkehrssysteme GmbH (AT)
SIBS (PT)
Thales e-Transactions España (ES)
See also
EFTPOS
Open Payment Initiative
Wire transfer
Electronic funds transfer
ERIDANE
External links
Official Web site
Sources
“Standardisierungsarbeiten im europäischen Zahlungsverkehr - Chancen für SEPA” SRC - Security Research & Consulting GmbH, Bonn - Wiesbaden, Germany, 2006, p. 5, 11 (PDF-transparencies)
William Vanobberghen, „Le Projet EPAS - Sécurité, protection des personnes et des donnée: de nouvelles technologies et des standards pour fiabiliser le contrôle et l’identification“, Groupement des Cartes Bancaires, 27 June 2006 (PPT-transparencies)
Hans-Rainer Frank, „SEPA aus Sicht eines europäischen Tankstellenbetreibers“, Arbeitskreis ePayment, Brussels, 11 May 2006, p. 11 (PDF-transparencies)
GROUPEMENT DES CARTES BANCAIRES, „EUROPEAN STANDARDISATION FOR ELECTRONIC PAYMENTS“, formerly at: https://web.archive.org/web/20070927174537/http://www.cartes-bancaires.com/en/dossiers/standard.html (dead link as of October 2011)
„EPC Card Fraud Prevention & Security Activities“, Cédric Sarazin – Chairman, Card Fraud Prevention TF, 19 December 2007, FPEG Meeting - Brussels, https://web.archive.org/web/20121024081807/http://ec.europa.eu/internal_market/fpeg/docs/sarazin_en.ppt
"EPAS Members", https://web.archive.org/web/20161220082713/http://nexo-standards.org/members
References
Retail point of sale systems
Payment systems
Banking terms |
34249411 | https://en.wikipedia.org/wiki/Bring%20your%20own%20device | Bring your own device | Bring your own device (BYOD)—also called bring your own technology (BYOT), bring your own phone (BYOP), and bring your own personal computer (BYOPC)—refers to being allowed to use one's personally owned device, rather than being required to use an officially provided device.
There are two major contexts in which this term is used. One is in the mobile phone industry, where it refers to carriers allowing customers to activate their existing phone (or other cellular device) on the network, rather than being forced to buy a new device from the carrier.
The other, and the main focus of this article, is in the workplace, where it refers to a policy of permitting employees to bring personally owned devices (laptops, tablets, smartphones, etc.) to work, and to use those devices to access privileged company information and applications. This phenomenon is commonly referred to as IT consumerization.
BYOD is making significant inroads in the business world, with about 75% of employees in high-growth markets such as Brazil and Russia and 44% in developed markets already using their own technology at work. Surveys have indicated that businesses are unable to stop employees from bringing personal devices into the workplace. Research is divided on benefits. One survey shows around 95% of employees stating they use at least one personal device for work.
History
The term was initially used by the VoIP service provider BroadVoice in 2004 (initially for AstriCon, but it then continued as a core part of the business model) with a service allowing businesses to bring their own device for a more open service provider model. The phrase and the "BYOD" acronym are a take-off on "BYOB", a party invitation term first recorded in the 1970s, standing for "bring your own beer/booze/bottle".
The term BYOD then entered common use in 2009, courtesy of Intel, when it recognized an increasing tendency among its employees to bring their own smartphones, tablets and laptop computers to work and connect them to the corporate network. However, it took until early 2011 before the term achieved prominence, when IT services provider Unisys and software vendor Citrix Systems started to share their perceptions of this emergent trend. BYOD has been characterized as a feature of the "consumer enterprise" in which enterprises blend with consumers. This is a role reversal in that businesses used to be the driving force behind consumer technology innovations and trends.
In 2012, the U.S. Equal Employment Opportunity Commission adopted a BYOD policy, but many employees continued to use their government-issued BlackBerrys because of concerns about billing, and the lack of alternative devices.
New trends
The proliferation of devices such as tablets and smartphones, now used by many people in their daily lives, has led a number of companies, such as IBM, to allow employees to bring their own devices to work, due to perceived productivity gains and cost savings. The idea was initially rejected because of security concerns, but more and more companies are now looking to incorporate BYOD policies.
According to a 2018 study, only 17 percent of enterprises provide mobile phones to all employees, while 31 percent provide to none and instead rely entirely on BYOD. The remaining 52 percent have some kind of hybrid approach where some employees receive corporate mobile phones and others are expected to bring their own.
Prevalence
The Middle East has one of the highest adoption rates (about 80%) of the practice worldwide in 2012.
According to research by Logicalis, high-growth markets (including Brazil, Russia, India, UAE, and Malaysia) demonstrate a much higher propensity to use their own device at work. Almost 75% of users in these countries did so, compared to 44% in the more mature developed markets.
In the UK, the CIPD Employee Outlook Survey 2013 revealed substantial variations by industry in the prevalence of BYOD.
Advantages
While some reports have indicated productivity gains by employees, the results have drawn skepticism. Companies such as Workspot believe that BYOD may help employees be more productive. Others say that using their own devices increases employee morale and convenience and makes the company look like a flexible and attractive employer. Many feel that BYOD can even be a means to attract new hires, pointing to a survey indicating that 44% of job seekers view an organization more positively if it supports their device.
Some industries are adopting BYOD more quickly than others. A recent study by Cisco partners of BYOD practices found that the education industry has the highest percentage of people using BYOD for work, at 95.25%.
A study by IBM says that 82% of employees think that smartphones play a critical role in business. The study also suggests that the benefits of BYOD include increased productivity, employee satisfaction, and cost savings for the company. Increased productivity comes from a user being more comfortable with their personal device; being an expert user makes navigating the device easier, increasing productivity. Additionally, personal devices are often more up-to-date, as the devices may be renewed more frequently. BYOD increases employee satisfaction and job satisfaction, as the user can use the device they have selected as their own rather than one selected by the IT team. It also allows them to carry one device rather than one for work and one for personal use. The company can save money as they are not responsible for furnishing the employee with a device, though this is not guaranteed.
Disadvantages
Although the ability of staff to work at any time from anywhere and on any device provides real business benefits, it also brings significant risks. Companies must deploy security measures to prevent information ending up in the wrong hands. According to an IDG survey, more than half of 1,600 senior IT security and technology purchase decision-makers reported serious violations of personal mobile device use.
Various risks arise from BYOD, and agencies such as the UK Fraud Advisory Panel encourage organisations to consider these and adopt a BYOD policy.
BYOD security relates strongly to the end node problem, whereby a device is used to access both sensitive and risky networks and services; risk-averse organizations issue devices specifically for Internet use (termed Inverse-BYOD).
BYOD has resulted in data breaches. For example, if an employee uses a smartphone to access the company network and then loses that phone, untrusted parties could retrieve any unsecured data on the phone. Another type of security breach occurs when an employee leaves the company; they do not have to give back the device, so company applications and other data may still be present on their device.
Furthermore, people may sell their devices and forget to wipe sensitive information before the handover. Family members may share devices such as tablets; a child could play games on a parent's tablet and accidentally share sensitive content via email or other means such as Dropbox.
IT security departments wishing to monitor usage of personal devices must ensure that they monitor only activities that are work-related or access company data or information.
Organizations adopting a BYOD policy must also consider how they will ensure that the devices which connect to the organisation's network infrastructure to access sensitive information will be protected from malware. Traditionally, if a device is owned by the organisation, the organisation can dictate for what purposes the device may be used or which public sites may be accessed from it. An organisation can typically expect users to use their own devices to connect to the Internet from private or public locations. Such users could be susceptible to attacks originating from untethered browsing, or could potentially access less secure or compromised sites that may contain harmful material and compromise the security of the device.
Software developers and device manufacturers constantly release security patches to counteract threats from malware. IT departments that support organisations with a BYOD policy must have systems and processes to apply patches protecting systems against known vulnerabilities of the devices that users may use. Ideally, such departments should have agile systems that can quickly adopt the support necessary for new devices. Supporting a broad range of devices obviously carries a large administrative overhead. Organisations without a BYOD policy have the benefit of selecting a small number of devices to support, while organisations with a BYOD policy could also limit the number of supported devices, though this could defeat the objective of allowing users to choose their preferred device freely.
Several market solutions and policies have emerged to address BYOD security concerns, including mobile device management (MDM), containerization and app virtualization. While MDM allows organizations to control applications and content on the device, research has revealed controversy related to employee privacy and usability issues that lead to resistance in some organizations. Corporate liability issues have also emerged when businesses wipe devices after employees leave the organization.
A key issue of BYOD that is often overlooked is the phone number problem, which raises the question of the ownership of the phone number. The issue becomes apparent when employees in sales or other customer-facing roles leave the company and take their phone number with them. Customers calling the number will then potentially be calling competitors, which can lead to loss of business for BYOD enterprises.
International research reveals that only 20% of employees have signed a BYOD policy.
It is more difficult for the firm to manage and control the consumer technologies and make sure they serve the needs of the business. Firms need an efficient inventory management system that keeps track of the devices employees are using, where the device is located, whether it is being used, and what software it is equipped with. If sensitive, classified, or criminal data lands on a U.S. government employee's device, the device is subject to confiscation.
Another important issue with BYOD is scalability and capability. Many organisations lack proper network infrastructure to handle the large traffic generated when employees use different devices at the same time. Nowadays, employees use mobile devices as their primary devices and they demand performance to which they are accustomed. Earlier smartphones used modest amounts of data that were easily handled by wireless LANs, but modern smartphones can access webpages as quickly as most PCs do and may use radio and voice at high bandwidths, increasing demand on WLAN infrastructure.
Finally, there is confusion regarding reimbursement for the use of a personal device. A recent court ruling in California indicates the need for reimbursement if an employee is required to use their personal device for work. In other cases, companies can have trouble navigating the tax implications of reimbursement and the best practices surrounding reimbursement for personal device use. A 2018 study found that 89 percent of organizations with a BYOD policy provide a full or partial stipend to compensate employees for their mobile phone expenses. On average, these organizations paid employees $36 per month as a BYOD stipend.
Personally owned, company enabled (POCE)
A personally owned device is any technology device that was purchased by an individual and was not issued by the agency. A personal device includes any portable technology such as cameras, USB flash drives, mobile wireless devices, tablets, laptops or personal desktop computers.
Corporate-owned, personally enabled (COPE)
As part of enterprise mobility, an alternative approach is corporate-owned, personally enabled (COPE) devices. Under such policies, the company purchases and provides devices to its employees, but the functionality of a private device is enabled to allow personal usage. The company maintains all of these devices similarly to simplify its IT management; the organization will have permission to delete all data on a device remotely without incurring penalties and without violating the privacy of its employees.
BYOD policy
A BYOD policy must be created based on the company's requirements. BYOD can be dangerous to organizations, as mobile devices may carry malware. If an infected device connects to the company network, data breaches may occur. If a mobile device has access to business computing systems, the company's IT administrator should have control over it. A BYOD policy helps eliminate the risk of having malware in the network, as the management team can monitor all contents of the device and erase data if any suspicious event is captured. BYOD policies may specify that the company is responsible for any devices connected to a company network.
Additional policies
BYOD policies can vary greatly from organization to organization depending on the concerns, risks, threats, and culture, so differ in the level of flexibility given to employees to select device types. Some policies dictate a narrow range of devices; others allow a broader range of devices. Related to this, policies can be structured to prevent IT from having an unmanageable number of different device types to support. It is also important to state clearly which areas of service and support are the employees' responsibilities versus the company's responsibility.
BYOD users may get help paying for their data plans with a stipend from their company. The policy may also specify whether an employee is paid overtime for answering phone calls or checking email after hours or on weekends. Additional policy aspects may include how to authorize use, prohibited use, perform systems management, handle policy violations, and handle liability issues.
For consistency and clarity, BYOD policy should be integrated with the overall security policy and the acceptable use policy. To help ensure policy compliance and understanding, a user communication and training process should be in place and ongoing.
See also
Bring your own encryption
Bring your own operating system
Mobile security
One to one computing
Remote mobile virtualization
References
Mobile phones
Mobile computers
Mobile telecommunication services
Mobile device management |
38437140 | https://en.wikipedia.org/wiki/List%20of%20RNA-Seq%20bioinformatics%20tools | List of RNA-Seq bioinformatics tools | RNA-Seq is a technique that allows transcriptome studies (see also Transcriptomics technologies) based on next-generation sequencing technologies. This technique is largely dependent on bioinformatics tools developed to support the different steps of the process. Here are listed some of the principal tools commonly employed and links to some important web resources.
Design
Design is a fundamental step of an RNA-Seq experiment. Important questions such as sequencing depth/coverage and the number of biological or technical replicates must be carefully considered.
PROPER: PROspective Power Evaluation for RNAseq.
RNAtor: an Android Application to calculate optimal parameters for popular tools and kits available for DNA sequencing projects.
Scotty: a web tool for designing RNA-Seq experiments to measure differential gene expression.
ssizeRNA Sample Size Calculation for RNA-Seq Experimental Design.
Quality control, trimming, error correction and pre-processing of data
Quality assessment of raw data is the first step of the bioinformatics pipeline of RNA-Seq. Often, it is necessary to filter the data, removing low-quality sequences or bases (trimming), adapters, contaminations and overrepresented sequences, or correcting errors to ensure a coherent final result.
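As a minimal illustration of what this pre-processing involves, the following Python sketch trims low-quality bases from the 3' end of a single FASTQ read and discards reads that become too short; the Phred offset, quality threshold and minimum length are arbitrary example values, and the dedicated tools listed below implement far more sophisticated and better-validated strategies.

def phred_scores(quality_string, offset=33):
    # Convert an ASCII quality string (Phred+33 by default) to integer scores.
    return [ord(ch) - offset for ch in quality_string]

def trim_3prime(seq, qual, min_quality=20, min_length=30):
    # Trim low-quality bases from the 3' end; return None if the read becomes too short.
    scores = phred_scores(qual)
    end = len(seq)
    while end > 0 and scores[end - 1] < min_quality:
        end -= 1
    if end < min_length:
        return None  # discard the read
    return seq[:end], qual[:end]

# Toy example (invented read and qualities)
seq = "ACGTACGTACGTACGTACGTACGTACGTACGTGGTT"
qual = "IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII##!!"
print(trim_3prime(seq, qual))  # the four low-quality bases at the 3' end are removed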
Quality control
AfterQC - Automatic Filtering, Trimming, Error Removing and Quality Control for fastq data.
bam-lorenz-coverage A tool that can generate Lorenz plots and Coverage plots, or export these statistics to text files, directly from BAM file(s).
dupRadar An R package which provides functions for plotting and analyzing the duplication rates dependent on the expression levels.
FastQC is a quality control tool for high-throughput sequence data (Babraham Institute), developed in Java. Data can be imported from FastQ, BAM or SAM files. The tool provides an overview that flags problematic areas, with summary graphs and tables for rapid assessment of the data. Results are presented in permanent HTML reports. FastQC can be run as a stand-alone application or integrated into a larger pipeline solution.
fastqp Simple FASTQ quality assessment using Python.
Kraken: A set of tools for quality control and analysis of high-throughput sequence data.
HTSeq The Python script htseq-qa takes a file with sequencing reads (either raw or aligned reads) and produces a PDF file with useful plots to assess the technical quality of a run.
mRIN - Assessing mRNA integrity directly from RNA-Seq data.
MultiQC - Aggregate and visualise results from numerous tools (FastQC, HTSeq, RSeQC, Tophat, STAR, others..) across all samples into a single report.
NGSQC: cross-platform quality analysis pipeline for deep sequencing data.
NGS QC Toolkit A toolkit for the quality control (QC) of next generation sequencing (NGS) data. The toolkit comprises user-friendly stand-alone tools for quality control of the sequence data generated using Illumina and Roche 454 platforms, with detailed results in the form of tables and graphs, and for filtering of high-quality sequence data. It also includes a few other tools which are helpful in NGS data quality control and analysis.
PRINSEQ is a tool that generates summary statistics of sequence and quality data and that is used to filter, reformat and trim next-generation sequence data. It is particularly designed for 454/Roche data, but can also be used for other types of sequence data.
QC-Chain is a package of quality control tools for next generation sequencing (NGS) data, consisting of both raw reads quality evaluation and de novo contamination screening, which could identify all possible contamination sequences.
QC3 a quality control tool designed for DNA sequencing data for raw data, alignment, and variant calling.
qrqc Quickly scans reads and gathers statistics on base and quality frequencies, read length, and frequent sequences. Produces graphical output of statistics for use in quality control pipelines, and an optional HTML quality report. S4 SequenceSummary objects allow specific tests and functionality to be written around the data collected.
RNA-SeQC is a tool with application in experiment design, process optimization and quality control before computational analysis. Essentially, it provides three types of quality control: read counts (such as duplicate reads, mapped reads and mapped unique reads, rRNA reads, transcript-annotated reads, strand specificity), coverage (such as mean coverage, mean coefficient of variation, 5'/3' coverage, gaps in coverage, GC bias) and expression correlation (the tool provides RPKM-based estimation of expression levels). RNA-SeQC is implemented in Java and does not require installation; it can also be run using the GenePattern web interface. The input can be one or more BAM files. HTML reports are generated as output.
RSeQC analyzes diverse aspects of RNA-Seq experiments: sequence quality, sequencing depth, strand specificity, GC bias, read distribution over the genome structure and coverage uniformity. The input can be SAM, BAM, FASTA, BED files or Chromosome size file (two-column, plain text file). Visualization can be performed by genome browsers like UCSC, IGB and IGV. However, R scripts can also be used for visualization.
SAMStat identifies problems and reports several statistics at different phases of the process. This tool evaluates unmapped, poorly and accurately mapped sequences independently to infer possible causes of poor mapping.
SolexaQA calculates sequence quality statistics and creates visual representations of data quality for second-generation sequencing data. Originally developed for the Illumina system (historically known as “Solexa”), SolexaQA now also supports Ion Torrent and 454 data.
Trim galore is a wrapper script to automate quality and adapter trimming as well as quality control, with some added functionality to remove biased methylation positions for RRBS sequence files (for directional, non-directional (or paired-end) sequencing).
Improving the quality
Improving RNA-Seq quality and correcting bias is a complex subject. Each RNA-Seq protocol introduces a specific type of bias, and each step of the process (such as the sequencing technology used) is susceptible to generating some sort of noise or error. Furthermore, even the species under investigation and the biological context of the samples can influence the results and introduce some kind of bias.
Many sources of bias have already been reported: GC content and PCR enrichment, rRNA depletion, errors produced during sequencing, and priming of reverse transcription caused by random hexamers.
Different tools have been developed to attempt to correct each of these detected errors.
Trimming and adapters removal
AlienTrimmer implements a very fast approach (based on k-mers) to trim low-quality base pairs and clip technical (alien) oligonucleotides from single- or paired-end sequencing reads in plain or gzip-compressed FASTQ files (for more details, see AlienTrimmer).
BBDuk multithreaded tool to trim adapters and filter or mask contaminants based on kmer-matching, allowing a hamming- or edit-distance, as well as degenerate bases. Also performs optimal quality-trimming and filtering, format conversion, contaminant concentration reporting, gc-filtering, length-filtering, entropy-filtering, chastity-filtering, and generates text histograms for most operations. Interconverts between fastq, fasta, sam, scarf, interleaved and 2-file paired, gzipped, bzipped, ASCII-33 and ASCII-64. Keeps pairs together. Open-source, written in pure Java; supports all platforms with no recompilation and no other dependencies.
clean_reads cleans NGS (Sanger, 454, Illumina and solid) reads. It can trim bad quality regions, adaptors, vectors, and regular expressions. It also filters out the reads that do not meet a minimum quality criteria based on the sequence length and the mean quality.
condetri is a method for content-dependent read trimming for Illumina data, using the quality score of each base individually. It is independent of sequencing coverage and user interaction. The main focus of the implementation is on usability and on incorporating read trimming into next-generation sequencing data processing and analysis pipelines. It can process single-end and paired-end sequencing data of arbitrary length.
cutadapt removes adapter sequences from next-generation sequencing data (Illumina, SOLiD and 454). It is used especially when the read length of the sequencing machine is longer than the sequenced molecule, like the microRNA case.
Deconseq Detect and remove contaminations from sequence data.
Erne-Filter is a short string alignment package whose goal is to provide an all-inclusive set of tools to handle short (NGS-like) reads. ERNE comprises ERNE-FILTER (read trimming and contamination filtering), ERNE-MAP (core alignment tool/algorithm), ERNE-BS5 (bisulfite-treated reads aligner), and ERNE-PMAP/ERNE-PBS5 (distributed versions of the aligners).
FastqMcf Fastq-mcf attempts to: Detect & remove sequencing adapters and primers; Detect limited skewing at the ends of reads and clip; Detect poor quality at the ends of reads and clip; Detect Ns, and remove from ends; Remove reads with CASAVA 'Y' flag (purity filtering); Discard sequences that are too short after all of the above; Keep multiple mate-reads in sync while doing all of the above.
FASTX Toolkit is a set of command line tools to manipulate reads in FASTA or FASTQ format. These commands make it possible to preprocess files before mapping with tools like Bowtie. Some of the tasks allowed are: conversion from FASTQ to FASTA format, quality statistics, removal of sequencing adapters, filtering and trimming of sequences based on quality, and DNA/RNA conversion.
Flexbar performs removal of adapter sequences, trimming and filtering features.
FreClu improves overall alignment accuracy by performing sequencing-error correction through trimming short reads, based on a clustering methodology.
htSeqTools is a Bioconductor package able to perform quality control, processing of data and visualization. htSeqTools makes it possible to visualize sample correlations, to remove over-amplification artifacts, to assess enrichment efficiency, to correct strand bias and to visualize hits.
NxTrim Adapter trimming and virtual library creation routine for Illumina Nextera Mate Pair libraries.
PRINSEQ generates statistics of your sequence data for sequence length, GC content, quality scores, n-plicates, complexity, tag sequences, poly-A/T tails, odds ratios. Filter the data, reformat and trim sequences.
Sabre A barcode demultiplexing and trimming tool for FastQ files.
Scythe A 3'-end adapter contaminant trimmer.
SEECER is a sequencing error correction algorithm for RNA-seq data sets. It takes the raw read sequences produced by a next generation sequencing platform like machines from Illumina or Roche. SEECER removes mismatch and indel errors from the raw reads and significantly improves downstream analysis of the data. Especially if the RNA-Seq data is used to produce a de novo transcriptome assembly, running SEECER can have tremendous impact on the quality of the assembly.
Sickle A windowed adaptive trimming tool for FASTQ files using quality.
SnoWhite is a pipeline designed to flexibly and aggressively clean sequence reads (gDNA or cDNA) prior to assembly. It takes in and returns fastq or fasta formatted sequence files.
ShortRead is a package provided in the R (programming language) / Bioconductor environment that allows input, manipulation, quality assessment and output of next-generation sequencing data. The tool makes data manipulation possible, such as filtering to remove reads based on predefined criteria. ShortRead can be complemented with several Bioconductor packages for further analysis and visualization solutions (Biostrings, BSgenome, IRanges, and so on).
SortMeRNA is a program tool for filtering, mapping and OTU-picking NGS reads in metatranscriptomic and metagenomic data. The core algorithm is based on approximate seeds and allows for analyses of nucleotide sequences. The main application of SortMeRNA is filtering ribosomal RNA from metatranscriptomic data.
TagCleaner The TagCleaner tool can be used to automatically detect and efficiently remove tag sequences (e.g. WTA tags) from genomic and metagenomic datasets. It is easily configurable and provides a user-friendly interface.
Trimmomatic performs trimming for Illumina platforms and works with FASTQ reads (single or pair-ended). Some of the tasks executed are: cut adapters, cut bases in optional positions based on quality thresholds, cut reads to a specific length, converts quality scores to Phred-33/64.
fastp A tool designed to provide all-in-one preprocessing for FastQ files. This tool is developed in C++ with multithreading supported.
FASTX-Toolkit The FASTX-Toolkit is a collection of command line tools for Short-Reads FASTA/FASTQ files preprocessing.
Detection of chimeric reads
Recent sequencing technologies normally require DNA samples to be amplified via polymerase chain reaction (PCR). Amplification often generates chimeric elements (especially of ribosomal origin): sequences formed from two or more original sequences joined together.
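A crude way to picture chimera screening is to ask whether the two halves of a read match two different source sequences. The Python sketch below does exactly that with naive substring search on invented toy references; the tools listed below use proper alignment, abundance information and breakpoint models instead.

def chimera_candidate(read, references):
    # Return the pair of reference names hit by the two halves of the read, if they differ.
    half = len(read) // 2
    hits = []
    for part in (read[:half], read[half:]):
        name = next((n for n, seq in references.items() if part in seq), None)
        hits.append(name)
    if None not in hits and hits[0] != hits[1]:
        return tuple(hits)
    return None

# Invented toy references and read: first half from rRNA_A, second half from rRNA_B
references = {"rRNA_A": "AAGGCCTTAAGGCCTTAAGGCCTT",
              "rRNA_B": "TTCCGGAATTCCGGAATTCCGGAA"}
read = "GGCCTTAAGG" + "CCGGAATTCC"
print(chimera_candidate(read, references))  # ('rRNA_A', 'rRNA_B')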
UCHIME is an algorithm for detecting chimeric sequences.
ChimeraSlayer is a chimeric sequence detection utility, compatible with near-full length Sanger sequences and shorter 454-FLX sequences (~500 bp).
Error correction
Characterization of high-throughput sequencing errors and their eventual correction.
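Several of the correctors listed below (for example Bless, Blue and BFC) build on k-mer counting: in deeply sequenced data, k-mers that occur only rarely are likely to contain sequencing errors, while "solid" (frequent) k-mers are trusted. The Python sketch below illustrates only this flagging step, with arbitrary values of k and of the frequency cutoff; the real tools add statistical modelling and the actual correction of the flagged bases.

from collections import Counter

def kmer_counts(reads, k=15):
    # Count every k-mer across a collection of reads.
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspicious_positions(read, counts, k=15, min_count=3):
    # Flag positions that are not covered by any solid (frequent) k-mer.
    covered = [False] * len(read)
    for i in range(len(read) - k + 1):
        if counts[read[i:i + k]] >= min_count:
            for j in range(i, i + k):
                covered[j] = True
    return [j for j, ok in enumerate(covered) if not ok]

# Usage sketch: counts = kmer_counts(all_reads); suspicious_positions(some_read, counts)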
Acacia Error-corrector for pyrosequenced amplicon reads.
AllPathsLG error correction.
AmpliconNoise AmpliconNoise is a collection of programs for the removal of noise from 454 sequenced PCR amplicons. It involves two steps the removal of noise from the sequencing itself and the removal of PCR point errors. This project also includes the Perseus algorithm for chimera removal.
BayesHammer. Bayesian clustering for error correction. This algorithm is based on Hamming graphs and Bayesian subclustering. While BAYES HAMMER was designed for single-cell sequencing, it also improves on existing error correction tools for bulk sequencing data.
Bless A bloom filter-based error correction solution for high-throughput sequencing reads.
Blue Blue is a short-read error-correction tool based on k-mer consensus and context.
BFC A sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm with a speed comparable to implementations based on greedy methods.
Denoiser Denoiser is designed to address issues of noise in pyrosequencing data. Denoiser is a heuristic variant of PyroNoise. Developers of denoiser report a good agreement with PyroNoise on several test datasets.
Echo A reference-free short-read error correction algorithm.
Lighter. A sequencing error correction without counting.
LSC LSC uses short Illumina reads to correct errors in long reads.
Karect Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data.
NoDe NoDe: an error-correction algorithm for pyrosequencing amplicon reads.
PyroTagger PyroTagger: A fast, accurate pipeline for analysis of rRNA amplicon pyrosequence data.
Quake is a tool to correct substitution sequencing errors in experiments with deep coverage for Illumina sequencing reads.
QuorUM: An Error Corrector for Illumina Reads.
Rcorrector. Error correction for Illumina RNA-seq reads.
Reptile is a software developed in C++ for correcting sequencing errors in short reads from next-gen sequencing platforms.
Seecer SEquencing Error CorrEction for Rna reads.
SGA
SOAPdenovo
UNOISE
Bias correction
Alpine Modeling and correcting fragment sequence bias for RNA-seq.
cqn is a normalization tool for RNA-Seq data, implementing the conditional quantile normalization method.
EDASeq is a Bioconductor package to perform GC-Content Normalization for RNA-Seq Data.
GeneScissors A comprehensive approach to detecting and correcting spurious transcriptome inference due to RNAseq reads misalignment.
Peer is a collection of Bayesian approaches to infer hidden determinants and their effects from gene expression profiles using factor analysis methods. Applications of PEER have: a) detected batch effects and experimental confounders, b) increased the number of expression QTL findings by threefold, c) allowed inference of intermediate cellular traits, such as transcription factor or pathway activations.
RUV is an R package that implements the remove unwanted variation (RUV) methods of Risso et al. (2014) for the normalization of RNA-Seq read counts between samples.
sva Surrogate Variable Analysis.
svaseq removing batch effects and other unwanted noise from sequencing data.
SysCall is a classifier tool for the identification and correction of systematic errors in high-throughput sequence data.
Other tasks/pre-processing data
Further tasks performed before alignment, namely paired-read mergers.
AuPairWise A Method to Estimate RNA-Seq Replicability through Co-expression.
BamHash is a checksum based method to ensure that the read pairs in FASTQ files match exactly the read pairs stored in BAM files, regardless of the ordering of reads. BamHash can be used to verify the integrity of the files stored and discover any discrepancies. Thus, BamHash can be used to determine if it is safe to delete the FASTQ files storing raw sequencing reads after alignment, without the loss of data.
BBMerge Merges paired reads based on overlap to create longer reads, and an insert-size histogram. Fast, multithreaded, and yields extremely few false positives. Open-source, written in pure Java; supports all platforms with no recompilation and no other dependencies. Distributed with BBMap.
Biopieces are a collection of bioinformatics tools that can be pieced together in a very easy and flexible manner to perform both simple and complex tasks. The Biopieces work on a data stream in such a way that the data stream can be passed through several different Biopieces, each performing one specific task: modifying or adding records to the data stream, creating plots, or uploading data to databases and web services.
COPE COPE: an accurate k-mer-based pair-end reads connection tool to facilitate genome assembly.
DeconRNASeq is an R package for deconvolution of heterogeneous tissues based on mRNA-Seq data.
FastQ Screen screens FASTQ format sequences to a set of databases to confirm that the sequences contain what is expected (such as species content, adapters, vectors, etc.).
FLASH is a read pre-processing tool. FLASH combines paired-end reads which overlap and converts them to single long reads.
IDCheck
ORNA and ORNA Q/K A tool for reducing redundancy in RNA-seq data which reduces the computational resource requirements of an assembler
PANDASeq is a program to align Illumina reads, optionally with PCR primers embedded in the sequence, and reconstruct an overlapping sequence.
PEAR PEAR: Illumina Paired-End reAd mergeR.
qRNASeq script The qRNAseq tool can be used to accurately eliminate PCR duplicates from RNA-Seq data if Molecular Indexes™ or other stochastic labels have been used during library prep.
SHERA a SHortread Error-Reducing Aligner.
XORRO Rapid Paired-End Read Overlapper.
DecontaMiner detects contamination in RNA-Seq data.
Alignment tools
After quality control, the first step of RNA-Seq analysis involves alignment of the sequenced reads to a reference genome (if available) or to a transcriptome database. See also List of sequence alignment software.
Short (unspliced) aligners
Short aligners are able to align continuous reads (which do not contain gaps resulting from splicing) to a reference genome. Basically, there are two types: 1) those based on the Burrows–Wheeler transform method, such as Bowtie and BWA, and 2) those based on seed-extend methods using the Needleman–Wunsch or Smith–Waterman algorithms. The first group (Bowtie and BWA) is many times faster; however, some tools of the second group tend to be more sensitive, generating more correctly aligned reads.
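To make the first family more concrete, the Python sketch below builds the Burrows–Wheeler transform of a toy reference and counts exact occurrences of a pattern with the classic backward search. Real aligners such as Bowtie and BWA wrap this idea in an FM-index with compressed rank structures, mismatch handling and quality-aware search, all of which are omitted here.

def bwt(reference):
    # Burrows-Wheeler transform of reference + sentinel '$', via sorted cyclic rotations.
    text = reference + "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

def backward_search(bwt_string, pattern):
    # Count exact occurrences of pattern using LF-mapping over the BWT.
    sorted_chars = sorted(bwt_string)
    first = {c: sorted_chars.index(c) for c in set(bwt_string)}  # characters smaller than c
    occ = lambda c, i: bwt_string[:i].count(c)                   # occurrences of c in the prefix
    top, bottom = 0, len(bwt_string)
    for c in reversed(pattern):
        if c not in first:
            return 0
        top = first[c] + occ(c, top)
        bottom = first[c] + occ(c, bottom)
        if top >= bottom:
            return 0
    return bottom - top

reference = "ACGTACGTGGACGT"   # toy reference sequence
print(backward_search(bwt(reference), "ACGT"))  # 3 exact matches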
BFAST aligns short reads to reference sequences and presents particular sensitivity towards errors, SNPs, insertions and deletions. BFAST works with the Smith–Waterman algorithm.
Bowtie is a short aligner using an algorithm based on the Burrows–Wheeler transform and the FM-index. Bowtie tolerates a small number of mismatches.
Bowtie2 Bowtie 2 is a memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly recommended for aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM-index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.
Burrows–Wheeler Aligner (BWA) BWA is a software package for mapping low-divergent sequences against a large reference genome, such as the human genome. It consists of three algorithms: BWA-backtrack, BWA-SW and BWA-MEM. The first algorithm is designed for Illumina sequence reads up to 100bp, while the other two are designed for longer sequences ranging from 70bp to 1Mbp. BWA-MEM and BWA-SW share similar features such as long-read support and split alignment, but BWA-MEM, which is the latest, is generally recommended for high-quality queries as it is faster and more accurate. BWA-MEM also has better performance than BWA-backtrack for 70-100bp Illumina reads.
Short Oligonucleotide Analysis Package (SOAP)
GNUMAP performs alignment using a probabilistic Needleman–Wunsch algorithm. This tool is able to handle alignment in repetitive regions of a genome without losing information. The output of the program was designed to allow easy visualization using available software.
Maq first aligns reads to reference sequences and afterwards performs a consensus stage. In the first stage it performs only ungapped alignment and tolerates up to 3 mismatches.
Mosaik Mosaik is able to align reads containing short gaps using Smith–Waterman algorithm, ideal to overcome SNPs, insertions and deletions.
NovoAlign (commercial) is a short aligner to the Illumina platform based on Needleman–Wunsch algorithm. It is able to deal with bisulphite data. Output in SAM format.
PerM is a software package which was designed to perform highly efficient genome scale alignments for hundreds of millions of short reads produced by the ABI SOLiD and Illumina sequencing platforms. PerM is capable of providing full sensitivity for alignments within 4 mismatches for 50bp SOLID reads and 9 mismatches for 100bp Illumina reads.
RazerS
SEAL uses a MapReduce model to provide distributed computing on clusters of computers. SEAL uses BWA to perform alignment and Picard MarkDuplicates for the detection and removal of duplicate reads.
segemehl
SeqMap
SHRiMP employs two techniques to align short reads. Firstly, the q-gram filtering technique based on multiple seeds identifies candidate regions. Secondly, these regions are investigated in detail using Smith–Waterman algorithm.
SMALT
Stampy combines the sensitivity of hash tables and the speed of BWA. Stampy is designed for the alignment of reads containing sequence variation such as insertions and deletions. It is able to deal with reads up to 4500 bases and presents the output in SAM format.
Subread is a read aligner. It uses the seed-and-vote mapping paradigm to determine the mapping location of the read by using its largest mappable region. It automatically decides whether the read should be globally mapped or locally mapped. For RNA-seq data, Subread should be used for the purpose of expression analysis. Subread can also be used to map DNA-seq reads.
ZOOM (commercial) is a short aligner of the Illumina/Solexa 1G platform. ZOOM uses extended spaced seeds methodology building hash tables for the reads, and tolerates mismatches and insertions and deletions.
WHAM WHAM is a high-throughput sequence alignment tool developed at the University of Wisconsin-Madison. It aligns short DNA sequences (reads) to the whole human genome at a rate of over 1,500 million 60bp reads per hour, which is one to two orders of magnitude faster than the leading state-of-the-art techniques.
Spliced aligners
Many reads span exon-exon junctions and cannot be aligned directly by short aligners, so specific tools, known as spliced aligners, are necessary. Some spliced aligners employ short aligners to first align unspliced/continuous reads (the exon-first approach), and then follow a different strategy to align the remaining reads containing spliced regions: normally these reads are split into smaller segments that are mapped independently, as in the sketch below.
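The split-read idea can be illustrated very simply: place the first and last segment of an unmapped read independently on the genome and interpret the gap between the two placements as an intron. The Python sketch below does this with naive exact search on an invented toy genome; actual spliced aligners use indexed search, splice-site motifs and scoring to choose among candidate junctions.

def find_splice(read, genome, seg=8):
    # Place the first and last seg bases of the read independently; report the implied intron.
    left, right = read[:seg], read[-seg:]
    left_pos = genome.find(left)
    if left_pos == -1:
        return None
    right_pos = genome.find(right, left_pos + seg)
    if right_pos == -1:
        return None
    implied_intron = right_pos - (left_pos + len(read) - seg)
    return {"left_exon_start": left_pos,
            "right_exon_end": right_pos + seg,
            "implied_intron": implied_intron}

# Invented toy genome: two exon-like blocks separated by a 10-base 'intron'
genome = "TTT" + "ACGTACGT" + "G" * 10 + "CCATGCAT" + "AAA"
read = "ACGTACGT" + "CCATGCAT"   # read spanning the junction
print(find_splice(read, genome))  # implied_intron == 10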
Aligners based on known splice junctions (annotation-guided aligners)
In this case the detection of splice junctions is based on data available in databases of known junctions. This type of tool cannot identify new splice junctions. Some of these data come from other expression methods like expressed sequence tags (EST).
Erange is a tool for alignment and data quantification of mammalian transcriptomes.
IsoformEx
MapAL
OSA
RNA-MATE is a computational pipeline for alignment of data from Applied Biosystems SOLID system. Provides the possibility of quality control and trimming of reads. The genome alignments are performed using mapreads and the splice junctions are identified based on a library of known exon-junction sequences. This tool allows visualization of alignments and tag counting.
RUM performs alignment based on a pipeline and is able to manipulate reads with splice junctions, using Bowtie and Blat. The pipeline starts with alignment against a genome and a transcriptome database, executed by Bowtie. The next step is to align the unmapped sequences to the reference genome using BLAT. In the final step all alignments are merged to obtain the final alignment. The input files can be in FASTA or FASTQ format. The output is presented in RUM and SAM format.
RNASEQR.
SAMMate
SpliceSeq
X-Mate
De novo splice aligners
De novo splice aligners allow the detection of new splice junctions without the need for previously annotated information (some of these tools offer annotation as a supplementary option).
ABMapper
BBMap Uses short kmers to align reads directly to the genome (spanning introns to find novel isoforms) or transcriptome. Highly tolerant of substitution errors and indels, and very fast. Supports output of all SAM tags needed by Cufflinks. No limit to genome size or number of splices per read. Supports Illumina, 454, Sanger, Ion Torrent, PacBio, and Oxford Nanopore reads, paired or single-ended. Does not use any splice-site-finding heuristics optimized for a single taxonomic branch, but rather finds optimally-scoring multi-affine-transform global alignments, and thus is ideal for studying new organisms with no annotation and unknown splice motifs. Open-source, written in pure Java; supports all platforms with no recompilation and no other dependencies.
ContextMap was developed to overcome some limitations of other mapping approaches, such as resolution of ambiguities. The central idea of this tool is to consider reads in gene expression context, improving this way alignment accuracy. ContextMap can be used as a stand-alone program and supported by mappers producing a SAM file in the output (e.g.: TopHat or MapSplice). In stand-alone mode aligns reads to a genome, to a transcriptome database or both.
CRAC proposes a novel way of analyzing reads that integrates genomic locations and local coverage, and detects candidate mutations, indels, splice or fusion junctions in each single read. Importantly, CRAC improves its predictive performance when supplied with, e.g., 200 nt reads, and should fit future needs of read analyses.
GSNAP
GMAP A Genomic Mapping and Alignment Program for mRNA and EST Sequences.
HISAT is a spliced alignment program for mapping RNA-seq reads. In addition to one global FM-index that represents a whole genome, HISAT uses a large set of small FM-indexes that collectively cover the whole genome (each index represents a genomic region of ~64,000 bp and ~48,000 indexes are needed to cover the human genome). These small indexes (called local indexes), combined with several alignment strategies, enable effective alignment of RNA-seq reads, in particular reads spanning multiple exons. The memory footprint of HISAT is relatively low (~4.3GB for the human genome). HISAT was developed on top of the Bowtie2 implementation to handle most of the operations on the FM-index.
HISAT2 is an alignment program for mapping next-generation sequencing reads (both DNA and RNA) to a population of human genomes (as well as to a single reference genome). It is based on an extension of BWT for graphs [Sirén et al. 2014] and implements a graph FM-index (GFM), described by its authors as an original approach and its first implementation. In addition to using one global GFM index that represents a population of human genomes, HISAT2 uses a large set of small GFM indexes that collectively cover the whole genome (each index representing a genomic region of 56 Kbp, with 55,000 indexes needed to cover the human population). These small indexes (called local indexes), combined with several alignment strategies, enable rapid and accurate alignment of sequencing reads. This new indexing scheme is called a Hierarchical Graph FM index (HGFM).
HMMSplicer can identify canonical and non-canonical splice junctions in short-reads. Firstly, unspliced reads are removed with Bowtie. After that, the remaining reads are one at a time divided in half, then each part is seeded against a genome and the exon borders are determined based on the Hidden Markov Model. A quality score is assigned to each junction, useful to detect false positive rates.
MapSplice
PALMapper
Pass aligns gapped and ungapped reads as well as bisulfite sequencing data. It includes the possibility to filter data before alignment (removal of adapters). Pass uses the Needleman–Wunsch and Smith–Waterman algorithms, and performs alignment in 3 stages: scanning positions of seed sequences in the genome, testing the contiguous regions and finally refining the alignment.
PASSion
PASTA
QPALMA predicts splice junctions based on machine learning algorithms. In this case the training set is a set of spliced reads with quality information and already known alignments.
RASER: reads aligner for SNPs and editing sites of RNA.
SeqSaw
SoapSplice A tool for genome-wide ab initio detection of splice junction sites from RNA-Seq, a method using new generation sequencing technologies to sequence the messenger RNA.
SpliceMap
SplitSeek
SuperSplat was developed to find all types of splice junctions. The algorithm splits each read into all possible two-chunk combinations in an iterative way, and alignment is attempted for each chunk. Output is in "Supersplat" format.
De novo splice aligners that also use annotation optionally
MapNext
OLego
STAR is a tool that employs "sequential maximum mappable seed search in uncompressed suffix arrays followed by seed clustering and stitching procedure", and detects canonical and non-canonical splice junctions as well as chimeric (fusion) sequences. It is already adapted to align long reads (third-generation sequencing technologies) and can reach speeds of 45 million paired reads per hour per processor.
Subjunc is a specialized version of Subread. It uses all mappable regions in an RNA-seq read to discover exons and exon-exon junctions. It uses the donor/receptor signals to find the exact splicing locations. Subjunc yields full alignments for every RNA-seq read including exon-spanning reads, in addition to the discovered exon-exon junctions. Subjunc should be used for the purpose of junction detection and genomic variation detection in RNA-seq data.
TopHat is prepared to find de novo junctions. TopHat aligns reads in two steps. Firstly, unspliced reads are aligned with Bowtie, and the aligned reads are assembled with Maq, resulting in islands of sequences. Secondly, the splice junctions are determined based on the initially unmapped reads and the possible canonical donor and acceptor sites within the island sequences.
Other spliced aligners
G.Mo.R-Se is a method that uses RNA-Seq reads to build de novo gene models.
Evaluation of alignment tools
AlignerBoost is a generalized software toolkit for boosting Next-Gen sequencing mapping precision using a Bayesian-based mapping quality framework.
CADBURE Bioinformatics tool for evaluating aligner performance on your RNA-Seq dataset.
QualiMap: Evaluating next generation sequencing alignment data.
RNAseqEVAL A collection of tools for evaluating RNA seq mapping.
Teaser: Individualized benchmarking and optimization of read mapping results for NGS data.
Normalization, quantitative analysis and differential expression
General tools
These tools perform normalization and calculate the abundance of each gene expressed in a sample. RPKM, FPKM and TPM are some of the units employed for the quantification of expression.
Some tools are also designed to study the variability of genetic expression between samples (differential expression). Quantitative and differential studies are largely determined by the quality of read alignment and the accuracy of isoform reconstruction. Several studies are available comparing differential expression methods.
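These units differ only in how raw counts are scaled by gene length and library size: RPKM (and FPKM, its fragment-based analogue for paired-end data) divides by kilobases of transcript and millions of mapped reads, while TPM normalises by length first and then rescales so that values sum to one million. The Python sketch below works through the arithmetic on an invented toy table of counts and lengths.

# Invented counts (mapped reads per gene) and gene lengths in bases
counts  = {"geneA": 1000, "geneB": 500, "geneC": 100}
lengths = {"geneA": 2000, "geneB": 1000, "geneC": 500}

total_reads = sum(counts.values())

# RPKM: reads per kilobase of transcript per million mapped reads
rpkm = {g: counts[g] / (lengths[g] / 1e3) / (total_reads / 1e6) for g in counts}

# TPM: normalise by length first, then scale so the values sum to one million
rate = {g: counts[g] / lengths[g] for g in counts}
tpm = {g: rate[g] / sum(rate.values()) * 1e6 for g in counts}

print(rpkm)  # geneA: 1000 / 2 / 0.0016 = 312500.0
print(tpm)   # TPM values sum to 1e6 by construction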
ABSSeq a new RNA-Seq analysis method based on modelling absolute expression differences.
ALDEx2 is a tool for comparative analysis of high-throughput sequencing data. ALDEx2 uses compositional data analysis and can be applied to RNAseq, 16S rRNA gene sequencing, metagenomic sequencing, and selective growth experiments.
Alexa-Seq is a pipeline that makes possible to perform gene expression analysis, transcript specific expression analysis, exon junction expression and quantitative alternative analysis. Allows wide alternative expression visualization, statistics and graphs.
ARH-seq – identification of differential splicing in RNA-seq data.
ASC
Ballgown
BaySeq is a Bioconductor package to identify differential expression using next-generation sequencing data, via empirical Bayesian methods. There is an option of using the "snow" package for parallelisation of computer data processing, recommended when dealing with large data sets.
GMNB is a Bayesian method for temporal differential gene expression analysis across different phenotypes or treatment conditions that naturally handles the heterogeneity of sequencing depth in different samples, removing the need for ad hoc normalization.
BBSeq
BitSeq (Bayesian Inference of Transcripts from Sequencing Data) is an application for inferring expression levels of individual transcripts from sequencing (RNA-Seq) data and estimating differential expression (DE) between conditions.
CEDER Accurate detection of differentially expressed genes by combining significance of exons using RNA-Seq.
CPTRA The CPTRA package is for analyzing transcriptome sequencing data from different sequencing platforms. It combines advantages of 454, Illumina GAII, or other platforms and can perform sequence tag alignment and annotation, expression quantification tasks.
casper is a Bioconductor package to quantify expression at the isoform level. It combines informative data summaries, flexible estimation of experimental biases and statistical precision considerations, which (reportedly) provide substantial reductions in estimation error.
Cufflinks/Cuffdiff is appropriate to measure global de novo transcript isoform expression. It performs assembly of transcripts, estimation of abundances and determines differential expression (Cuffdiff) and regulation in RNA-Seq samples.
DESeq is a Bioconductor package to perform differential gene expression analysis based on negative binomial distribution.
DEGSeq
Derfinder Annotation-agnostic differential expression analysis of RNA-seq data at base-pair resolution via the DER Finder approach.
DEvis is a powerful, integrated solution for the analysis of differential expression data. Using DESeq2 as a framework, DEvis provides a wide variety of tools for data manipulation, visualization, and project management.
DEXSeq is a Bioconductor package that finds differential exon usage based on RNA-Seq exon counts between samples. DEXSeq employs the negative binomial distribution and provides options for visualization and exploration of the results.
DEXUS is a Bioconductor package that identifies differentially expressed genes in RNA-Seq data under all possible study designs such as studies without replicates, without sample groups, and with unknown conditions. In contrast to other methods, DEXUS does not need replicates to detect differentially expressed transcripts, since the replicates (or conditions) are estimated by the EM method for each transcript.
DGEclust is a Python package for clustering expression data from RNA-seq, CAGE and other NGS assays using a Hierarchical Dirichlet Process Mixture Model. The estimated cluster configurations can be post-processed in order to identify differentially expressed genes and for generating gene- and sample-wise dendrograms and heatmaps.
DiffSplice is a method for differential expression detection and visualization that does not depend on gene annotations. The method is based on the identification of alternative splicing modules (ASMs) that diverge between isoforms. A non-parametric test is applied to each ASM to identify significant differential transcription at a measured false discovery rate.
EBSeq is a Bioconductor package for identifying genes and isoforms differentially expressed (DE) across two or more biological conditions in an RNA-seq experiment. It also can be used to identify DE contigs after performing de novo transcriptome assembly. While performing DE analysis on isoforms or contigs, different isoform/contig groups have varying estimation uncertainties. EBSeq models the varying uncertainties using an empirical Bayes model with different priors.
EdgeR is an R package for analysis of differential expression of data from DNA sequencing methods, like RNA-Seq, SAGE or ChIP-Seq data. edgeR employs statistical methods based on the negative binomial distribution as a model for count variability.
EdgeRun an R package for sensitive, functionally relevant differential expression discovery using an unconditional exact test.
EQP The exon quantification pipeline (EQP): a comprehensive approach to the quantification of gene, exon and junction expression from RNA-seq data.
ESAT The End Sequence Analysis Toolkit (ESAT) is specially designed for the quantification and annotation of specialized RNA-Seq gene libraries that target the 5' or 3' ends of transcripts.
eXpress performs transcript-level RNA-Seq quantification as well as allele-specific and haplotype analysis, and can estimate the abundances of the multiple isoforms present in a gene. Although it can be coupled directly with aligners (like Bowtie), eXpress can also be used with de novo assemblers, so a reference genome is not needed to perform alignment. It runs on Linux, Mac and Windows.
ERANGE performs alignment, normalization and quantification of expressed genes.
featureCounts an efficient general-purpose read quantifier.
FDM
FineSplice Enhanced splice junction detection and estimation from RNA-Seq data.
GFOLD Generalized fold change for ranking differentially expressed genes from RNA-seq data.
globalSeq Global test for counts: testing for association between RNA-Seq and high-dimensional data.
GPSeq This is a software tool to analyze RNA-seq data to estimate gene and exon expression, identify differentially expressed genes, and differentially spliced exons.
IsoDOT – Differential RNA-isoform Expression.
Limma Limma powers differential expression analyses for RNA-sequencing and microarray studies.
LPEseq accurately tests differential expression with a limited number of replicates.
Kallisto "Kallisto is a program for quantifying abundances of transcripts from RNA-Seq data, or more generally of target sequences using high-throughput sequencing reads. It is based on the novel idea of pseudoalignment for rapidly determining the compatibility of reads with targets, without the need for alignment. On benchmarks with standard RNA-Seq data, kallisto can quantify 30 million human reads in less than 3 minutes on a Mac desktop computer using only the read sequences and a transcriptome index that itself takes less than 10 minutes to build."
MATS Multivariate Analysis of Transcript Splicing (MATS).
MAPTest provides a general testing framework for differential expression analysis of RNA-Seq time course experiments. The method is based on a latent negative-binomial Gaussian mixture model, and the proposed test is optimal in terms of maximum average power. The test allows not only identification of traditional DE genes but also testing of a variety of composite hypotheses of biological interest.
MetaDiff Differential isoform expression analysis using random-effects meta-regression.
metaseqR is a Bioconductor package that detects differentially expressed genes from RNA-Seq data by combining six statistical algorithms using weights estimated from their performance with simulated data estimated from real data, either public or user-based. In this way, metaseqR optimizes the tradeoff between precision and sensitivity. In addition, metaseqR creates a detailed and interactive report with a variety of diagnostic and exploration plots and auto-generated text.
MMSEQ is a pipeline for estimating isoform expression and allelic imbalance in diploid organisms based on RNA-Seq. The pipeline employs tools like Bowtie, TopHat, ArrayExpressHTS and SAMtools; edgeR or DESeq can also be used to perform differential expression analysis.
MultiDE
Myrna is a pipeline tool that runs in a cloud environment (Elastic MapReduce) or on a single computer for estimating differential gene expression in RNA-Seq datasets. Bowtie is employed for short read alignment and R algorithms for interval calculations, normalization, and statistical processing.
NEUMA is a tool to estimate RNA abundances using length normalization, based on uniquely aligned reads and mRNA isoform models. NEUMA uses known transcriptome data available in databases like RefSeq.
NOISeq NOISeq is a non-parametric approach for the identification of differentially expressed genes from count data or previously normalized count data. NOISeq empirically models the noise distribution of count changes by contrasting fold-change differences (M) and absolute expression differences (D) for all the features in samples within the same condition.
NPEBseq is a nonparametric empirical Bayesian-based method for differential expression analysis.
NSMAP allows inference of isoforms as well as estimation of expression levels, without annotation information. The exons are aligned and splice junctions are identified using TopHat. All the possible isoforms are computed by a combination of the detected exons.
NURD an implementation of a new method to estimate isoform expression from non-uniform RNA-seq data.
PANDORA An R package for the analysis and result reporting of RNA-Seq data by combining multiple statistical algorithms.
PennSeq PennSeq: accurate isoform-specific gene expression quantification in RNA-Seq by modeling non-uniform read distribution.
Quark Quark enables semi-reference-based compression of RNA-seq data.
QuasR Quantify and Annotate Short Reads in R.
RapMap A Rapid, Sensitive and Accurate Tool for Mapping RNA-seq Reads to Transcriptomes.
RNAeXpress Can be run with a Java GUI or from the command line on Mac, Windows, and Linux. It can be configured to perform read counting, feature detection or GTF comparison on mapped RNA-Seq data.
Rcount Rcount: simple and flexible RNA-Seq read counting.
rDiff is a tool that can detect differential RNA processing (e.g. alternative splicing, polyadenylation or ribosome occupancy).
RNASeqPower Calculates sample size estimates for RNA-Seq studies. Available as an R package.
RNA-Skim RNA-Skim: a rapid method for RNA-Seq quantification at transcript-level.
rSeq rSeq is a set of tools for RNA-Seq data analysis. It consists of programs that deal with many aspects of RNA-Seq data analysis, such as read quality assessment, reference sequence generation, sequence mapping, gene and isoform expressions (RPKMs) estimation, etc.
RSEM
rQuant is a web service (Galaxy (computational biology) installation) that determines abundances of transcripts per gene locus, based on quadratic programming. rQuant is able to evaluate biases introduced by experimental conditions. A combination of tools is employed: PALMapper (reads alignment), mTiM and mGene (inference of new transcripts).
Salmon is a software tool for computing transcript abundance from RNA-seq data using either an alignment-free (based directly on the raw reads) or an alignment-based (based on pre-computed alignments) approach. It uses an online stochastic optimization approach to maximize the likelihood of the transcript abundances under the observed data. The software itself is capable of making use of many threads to produce accurate quantification estimates quickly. It is part of the Sailfish suite of software, and is the successor to the Sailfish tool.
SAJR is a java-written read counter and R-package for differential splicing analysis. It uses junction reads to estimate exon exclusion and reads mapped within exon to estimate its inclusion. SAJR models it by GLM with quasibinomial distribution and uses log likelihood test to assess significance.
Scotty Performs power analysis to estimate the number of replicates and depth of sequencing required to call differential expression.
Seal alignment-free algorithm to quantify sequence expression by matching kmers between raw reads and a reference transcriptome. Handles paired reads and alternate isoforms, and uses little memory. Accepts all common read formats, and outputs read counts, coverage, and FPKM values per reference sequence. Open-source, written in pure Java; supports all platforms with no recompilation and no other dependencies. Distributed with BBMap. (Seal - Sequence Expression AnaLyzer - is unrelated to the SEAL distributed short-read aligner.)
semisup Semi-supervised mixture model: detecting SNPs with interactive effects on a quantitative trait
Sleuth is a program for analysis of RNA-Seq experiments for which transcript abundances have been quantified with kallisto.
SplicingCompass differential splicing detection using RNA-Seq data.
sSeq The purpose of this R package is to discover the genes that are differentially expressed between two conditions in RNA-seq experiments.
StringTie is an assembler of RNA-Seq alignments into potential transcripts. It uses a novel network flow algorithm as well as an optional de novo assembly step to assemble and quantitate full-length transcripts representing multiple splice variants for each gene locus. It was designed as a successor to Cufflinks (its developers include some of the Cufflinks developers) and has many of the same features.
TIGAR Transcript isoform abundance estimation method with gapped alignment of RNA-Seq data by variational Bayesian inference.
TimeSeq Detecting Differentially Expressed Genes in Time Course RNA-Seq Data.
TPMCalculator one-step software to quantify mRNA abundance of genomic features.
WemIQ is a software tool to quantify isoform expression and exon splicing ratios from RNA-seq data accurately and robustly.
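Several of the packages listed above (for example DESeq and edgeR) model counts with a negative binomial distribution after placing samples on a common scale. As referenced in the DESeq entry, the sketch below illustrates the median-of-ratios idea behind such size factors; it is a simplified Python/NumPy illustration using an invented count matrix, not the actual implementation of any package.

```python
import numpy as np

# Toy count matrix: rows are genes, columns are samples (values invented for illustration).
counts = np.array([
    [100, 200, 150],
    [ 50, 120,  60],
    [300, 610, 290],
    [ 10,  25,  12],
], dtype=float)

# Median-of-ratios size factors:
# 1. build a pseudo-reference sample as the per-gene geometric mean across samples;
# 2. for each sample, take the median of its gene-wise ratios to that reference.
log_counts = np.log(counts)                       # assumes no zero counts in this toy example
log_geo_means = log_counts.mean(axis=1)           # log geometric mean per gene
log_ratios = log_counts - log_geo_means[:, None]  # log ratio of each sample to the reference
size_factors = np.exp(np.median(log_ratios, axis=0))

normalized = counts / size_factors                # counts brought onto a common scale
print("size factors:", np.round(size_factors, 3))
```

Real packages additionally handle zero counts, estimate dispersions and perform the statistical testing itself.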
Evaluation of quantification and differential expression
CompcodeR RNAseq data simulation, differential expression analysis and performance comparison of differential expression methods.
DEAR-O Differential Expression Analysis based on RNA-seq data – Online.
PROPER comprehensive power evaluation for differential expression using RNA-seq.
RNAontheBENCH computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.
rnaseqcomp Several quantitative and visualized benchmarks for RNA-seq quantification pipelines. Two-condition quantifications for genes, transcripts, junctions or exons by each pipeline, together with the necessary meta information, should be organized into numeric matrices in order to proceed with the evaluation.
Multi-tool solutions
DEB is a web interface/pipeline that permits comparison of significantly expressed genes from different tools. Three algorithms are currently available: edgeR, DESeq and baySeq.
SARTools A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data.
Transposable Element expression
TeXP is a Transposable Element quantification pipeline that deconvolves pervasive transcription from autonomous transcription of LINE-1 elements.
Workbench (analysis pipeline / integrated solutions)
Commercial solutions
ActiveSite by Cofactor Genomics
Avadis NGS (currently Strand NGS)
BaseSpace by Illumina
Biowardrobe an integrated platform for analysis of epigenomics and transcriptomics data.
BBrowser a platform for analyzing public and in-house single-cell transcriptomics data
CLC Genomics Workbench
DNASTAR
ERGO
Genedata
GeneSpring GX
Genevestigator by Nebion (the basic version is free for academic researchers).
geospiza
Golden Helix
Maverix Biomics
NextGENe
OmicsOffice
Partek Flow Comprehensive single cell analysis within an intuitive interface.
Qlucore. Easy to use for analysis and visualization. One button import of BAM files.
Open (free) source solutions
ArrayExpressHTS is a BioConductor package that allows preprocessing, quality assessment and estimation of expression of RNA-Seq datasets. It can be run remotely on the European Bioinformatics Institute cloud or locally. The package makes use of several tools: ShortRead (quality control), Bowtie, TopHat or BWA (alignment to a reference genome), SAMtools (alignment file handling), and Cufflinks or MMSEQ (expression estimation).
BioJupies is a web-based platform that provides complete RNA-seq analysis solution from free alignment service to a complete data analysis report delivered as an interactive Jupyter Notebook.
BioQueue is a web-based queue engine designed primarily to improve the efficiency and robustness of job execution in bioinformatics research by estimating the system resources required by a given job. At the same time, BioQueue also aims to promote the accessibility and reproducibility of data analysis in biomedical research. Implemented in Python 2.7, BioQueue can work on both POSIX-compatible systems (Linux, Solaris, OS X, etc.) and Windows.
BioWardrobe is an integrated package for analysis of ChIP-Seq and RNA-Seq datasets using a user-friendly web-based GUI. For RNA-Seq, BioWardrobe performs mapping, quality control, RPKM estimation and differential expression analysis between samples (or groups of samples). Results of differential expression analysis can be integrated with ChIP-Seq data to build average tag density profiles and heat maps. The package makes use of several open source tools, including STAR and DESeq.
Chipster is a user-friendly analysis software for high-throughput data. It contains over 350 analysis tools for next generation sequencing (NGS), microarray, proteomics and sequence data. Users can save and share automatic analysis workflows, and visualize data interactively using a built-in genome browser and many other visualizations.
DEWE (Differential Expression Workflow Executor) is an open source desktop application that provides a user-friendly GUI for easily executing differential expression analyses on RNA-Seq data. Currently, DEWE provides two differential expression analysis workflows: HISAT2, StringTie and Ballgown; and Bowtie2, StringTie and R libraries (Ballgown and edgeR). It runs on Linux, Windows and Mac OS X.
easyRNASeq Calculates the coverage of high-throughput short-reads against a genome of reference and summarizes it per feature of interest (e.g. exon, gene, transcript). The data can be normalized as 'RPKM' or by the 'DESeq' or 'edgeR' package.
ExpressionPlot
FASTGenomics is an online platform to share single-cell RNA sequencing data and analyses using reproducible workflows. Gene expression data can be shared meeting European data protection standards (GDPR). FASTGenomics enables the user to upload their own data and generate customized and reproducible workflows for the exploration and analysis of gene expression data (Scholz et al. 2018).
FX FX is a user-friendly RNA-Seq gene expression analysis tool built on cloud computing. With FX, users can simply upload raw RNA-Seq FASTQ data to the cloud and let the computing infrastructure do the heavy analysis.
Galaxy: Galaxy is a general purpose workbench platform for computational biology.
GENE-Counter is a Perl pipeline for RNA-Seq differential gene expression analyses. Gene-counter performs alignments with CASHX, Bowtie, BWA or another SAM-output aligner. Differential gene expression is run with three optional packages (NBPSeq, edgeR and DESeq) using negative binomial distribution methods. Results are stored in a MySQL database to make additional analyses possible.
GenePattern offers integrated solutions to RNA-Seq analysis (Broad Institute).
GeneProf Freely accessible, easy to use analysis pipelines for RNA-seq and ChIP-seq experiments.
GREIN is an interactive web platform for re-processing and re-analyzing GEO RNA-seq data. GREIN is powered by the back-end computational pipeline for uniform processing of RNA-seq data and the large number (>5,800) of already processed data sets. The front-end user friendly interfaces provide a wealth of user-analytics options including sub-setting and downloading processed data, interactive visualization, statistical power analyses, construction of differential gene expression signatures and their comprehensive functional characterization, connectivity analysis with LINCS L1000 data, etc.
GT-FAR is an RNA-Seq pipeline that performs RNA-Seq QC, alignment, reference-free quantification, and splice variant calling. It filters, trims, and sequentially aligns reads to gene models, predicts and validates new splice junctions, and then quantifies expression for each gene, exon, and known/novel splice junction, and performs variant calling.
MultiExperiment Viewer (MeV) is suitable to perform analysis, data mining and visualization of large-scale genomic data. The MeV modules include a variety of algorithms to execute tasks like Clustering and Classification, Student's t-test, Gene Set Enrichment Analysis or Significance Analysis. MeV runs on Java.
NGSUtils is a suite of software tools for working with next-generation sequencing datasets.
Rail-RNA Scalable analysis of RNA-seq splicing and coverage.
RAP RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.
RSEQtools "RSEQtools consists of a set of modules that perform common tasks such as calculating gene expression values, generating signal tracks of mapped reads, and segmenting that signal into actively transcribed regions. In addition to the anonymization afforded by this format it also facilitates the decoupling of the alignment of reads from downstream analyses."
RobiNA provides a graphical user interface for R/BioConductor packages. RobiNA provides a package that automatically installs all required external tools (R/Bioconductor frameworks and Bowtie). This tool offers a variety of quality control methods and can produce many tables and plots supplying detailed results for differential expression. Furthermore, the results can be visualized and manipulated with MapMan and PageMan. RobiNA runs on Java version 6.
RseqFlow is an RNA-Seq analysis pipeline which offers an express implementation of analysis steps for RNA sequencing datasets. It can perform pre and post mapping quality control (QC) for sequencing data, calculate expression levels for uniquely mapped reads, identify differentially expressed genes, and convert file formats for ease of visualization.
S-MART handles mapped RNA-Seq data and essentially performs data manipulation (selection/exclusion of reads, clustering and differential expression analysis) and visualization (read information, distribution, comparison with epigenomic ChIP-Seq data). It can be run on any laptop by a person without a computing background; a friendly graphical user interface makes the tools easy to operate.
Taverna is an open source and domain-independent Workflow Management System – a suite of tools used to design and execute scientific workflows and aid in silico experimentation.
TCW is a Transcriptome Computational Workbench.
TRAPLINE a standardized and automated pipeline for RNA sequencing data analysis, evaluation and annotation.
ViennaNGS A toolbox for building efficient next- generation sequencing analysis pipelines.
wapRNA This is a free web-based application for the processing of high-throughput RNA-Seq data (wapRNA) from next generation sequencing (NGS) platforms, such as the Illumina Genome Analyzer (Solexa) and Applied Biosystems SOLiD. wapRNA provides an integrated tool for RNA-Seq, which refers to the use of high-throughput sequencing technologies to sequence cDNAs in order to obtain information about a sample's RNA content.
Alternative splicing analysis
General tools
Alternative Splicing Analysis Tool Package (ASATP) includes a series of toolkits to analyze alternative splicing events, which can be used to detect and visualize alternative splicing events, check ORF changes, assess regulation of alternative splicing and perform statistical analyses.
Asprofile is a suite of programs for extracting, quantifying and comparing alternative splicing (AS) events from RNA-seq data.
AStalavista The AStalavista web server extracts and displays alternative splicing (AS) events from a given genomic annotation of exon-intron gene coordinates. By comparing all given transcripts, AStalavista detects the variations in their splicing structure and identifies all AS events (like exon skipping, alternate donor, etc.) by assigning to each of them an AS code.
CLASS2 accurate and efficient splice variant annotation from RNA-seq reads.
Cufflinks/Cuffdiff
DEXseq Inference of differential exon usage in RNA-Seq.
Diceseq Statistical modeling of isoform splicing dynamics from RNA-seq time series data.
EBChangepoint An empirical Bayes change-point model for identifying 3′ and 5′ alternative splicing by RNA-Seq.
Eoulsan A versatile framework dedicated to high-throughput sequencing data analysis. Allows automated analysis (mapping, counting and differential analysis with DESeq2).
GESS for de novo detection of exon-skipping event sites from raw RNA-seq reads.
LeafCutter a suite of novel methods that allow identification and quantification of novel and existing alternative splicing events by focusing on intron excisions.
LEMONS A Tool for the Identification of Splice Junctions in Transcriptomes of Organisms Lacking Reference Genomes.
MAJIQ. Modeling Alternative Junction Inclusion Quantification.
MATS Multivariate Analysis of Transcript Splicing (MATS).
MISO quantifies the expression level of splice variants from RNA-Seq data and is able to recognize differentially regulated exons/isoforms across different samples. MISO uses a probabilistic method (Bayesian inference) to calculate the probability of the reads origin.
Rail-RNA Scalable analysis of RNA-seq splicing and coverage.
RPASuite RPASuite (RNA Processing Analysis Suite) is a computational pipeline to identify differentially and coherently processed transcripts using RNA-seq data obtained from multiple tissue or cell lines.
RSVP RSVP is a software package for prediction of alternative isoforms of protein-coding genes, based on both genomic DNA evidence and aligned RNA-seq reads. The method is based on the use of ORF graphs, which are more general than the splice graphs used in traditional transcript assembly.
SAJR calculates the number of reads that confirm segment (the part of a gene between two nearest splice sites) inclusion or exclusion and then models these counts by a GLM with quasibinomial distribution to account for biological variability.
SGSeq An R package for de novo prediction of splicing events.
SplAdder Identification, quantification and testing of alternative splicing events from RNA-Seq data.
SpliceGrapher Prediction of novel alternative splicing events from RNA-Seq data. Also includes graphical tools for visualizing splice graphs.
SpliceJumper a classification-based approach for calling splicing junctions from RNA-seq data.
SplicePie is a pipeline to analyze non-sequential and multi-step splicing. SplicePie contains three major analysis steps: analyzing the order of splicing per sample, looking for recursive splicing events per sample and summarizing predicted recursive splicing events for all analyzed samples (it is recommended to use more samples for higher reliability). The first two steps are performed individually on each sample and the last step looks at the overlap in all samples. However, the analysis can be run on a single sample as well.
SplicePlot is a tool for visualizing alternative splicing and the effects of splicing quantitative trait loci (sQTLs) from RNA-seq data. It provides a simple command line interface for drawing sashimi plots, hive plots, and structure plots of alternative splicing events from .bam, .gtf, and .vcf files.
SpliceR An R package for classification of alternative splicing and prediction of coding potential from RNA-seq data.
SpliceSEQ SpliceViewer is a Java application that allows researchers to investigate alternative mRNA splicing patterns in data from high-throughput mRNA sequencing studies. Sequence reads are mapped to splice graphs that unambiguously quantify the inclusion level of each exon and splice junction. The graphs are then traversed to predict the protein isoforms that are likely to result from the observed exon and splice junction reads. UniProt annotations are mapped to each protein isoform to identify potential functional impacts of alternative splicing.
SpliceTrap is a statistical tool for the quantification of exon inclusion ratios from RNA-seq data.
Splicing Express – a software suite for alternative splicing analysis using next-generation sequencing data.
SUPPA This tool generates different alternative splicing (AS) events and calculates the PSI ("Percentage Spliced In") value for each event, exploiting the quantification of transcript abundances from multiple samples (a simplified illustration of a PSI calculation is shown after this list).
SwitchSeq identifies extreme changes in splicing (switch events).
Portcullis identification of genuine splice junctions.
TrueSight A Self-training Algorithm for Splice Junction Detection using RNA-seq.
Vast-tools A toolset for profiling alternative splicing events in RNA-Seq data.
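Many of the tools above summarize splicing events as PSI values, as noted in the SUPPA entry. The sketch below is a deliberately simplified Python illustration using invented junction read counts; tools such as SUPPA actually derive PSI from transcript abundances, and others normalize junction counts by effective lengths, both of which are omitted here.

```python
# Simplified "Percent Spliced In" (PSI) for one exon-skipping event.
# Read counts are invented for illustration only.
inclusion_reads = 80   # reads supporting junctions that include the alternative exon
exclusion_reads = 40   # reads supporting the junction that skips the exon

psi = inclusion_reads / (inclusion_reads + exclusion_reads)
print(f"PSI = {psi:.2f}")   # ~0.67: the exon is included in about two thirds of transcripts
```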
Intron retention analysis
IRcall / IRclassifier IRcall is a computational tool for IR event detection from RNA-Seq data. IRclassifier is a supervised machine learning-based approach for IR event detection from RNA-Seq data.
Differential isoform/transcript usage
IsoformSwitchAnalyzeR IsoformSwitchAnalyzeR is an R package that enables statistical identification of isoform switches with predicted functional consequences, where the consequences of interest can be chosen from a long list that includes gain/loss of protein domains, signal peptides and changes in NMD sensitivity. IsoformSwitchAnalyzeR is made for post-analysis of data from any full-length isoform/transcript quantification tool but directly supports Cufflinks/Cuffdiff, RSEM, Kallisto and Salmon.
DRIMSeq An R package that utilizes generalized linear modeling (GLM) to identify isoform switches from estimated isoform count data.
BayesDRIMSeq An R package containing a Bayesian implementation of DRIMSeq.
Cufflinks/Cuffdiff Full-length isoform/transcript quantification and differential analysis tool which, among other tests, checks for changes in usage of isoforms belonging to the same primary transcript (sharing a TSS) via a one-sided t-test based on the asymptotics of the Jensen-Shannon metric.
rSeqNP An R package that implements a non-parametric approach to test for differential expression and splicing from RNA-Seq data.
Isolator Full length isoform/transcript quantification and differential analysis tool which analyses all samples in an experiment in unison using a simple Bayesian hierarchical model. Can identify differential isoform usage by testing for probability of monotonic splicing.
Fusion genes/chimeras/translocation finders/structural variations
Genome rearrangements resulting from diseases like cancer can produce aberrant genetic modifications such as fusions or translocations. Identification of these modifications plays an important role in carcinogenesis studies.
Arriba is a fusion detection algorithm based on the STAR RNA-Seq aligner. It is the winner of the DREAM Challenge about fusion detection. Arriba can also detect viral integration sites, internal tandem duplications, whole exon duplications, circular RNAs, enhancer hijacking events involving immunoglobulin/T-cell receptor loci, and breakpoints in introns or intergenic regions.
Bellerophontes
BreakDancer
BreakFusion
ChimeraScan
EBARDenovo
EricScript
DEEPEST is a statistical fusion detection algorithm. DEEPEST can also detect Circular RNAs.
DeFuse DeFuse is a software package for gene fusion discovery using RNA-Seq data.
Dr. Disco Dr. Disco is a fusion detector that takes into account the entire reference genome and is therefore also able to detect genomic breakpoints. It is therefore particularly suited for rRNA-depleted (rRNA-minus) RNA-seq.
egfr-v3-determiner EGFR-v3-determiner is a tool that counts EGFRvIII and EGFRwt splice/structural variants directly from alignment files.
FusionAnalyser FusionAnalyser uses paired reads mapping to different genes (Bridge reads).
FusionCatcher FusionCatcher searches for novel/known somatic fusion genes, translocations, and chimeras in RNA-seq data (stranded/unstranded paired-end reads from Illumina NGS platforms) from diseased samples.
FusionHunter identifies fusion transcripts without depending on already known annotations. It uses Bowtie as a first aligner and paired-end reads.
FusionMap FusionMap is a fusion aligner which aligns reads spanning fusion junctions directly to the genome without prior knowledge of potential fusion regions. It detects and characterizes fusion junctions at base-pair resolution. FusionMap can be applied to detect fusion junctions in both single- and paired-end dataset from either gDNA-Seq or RNA-Seq studies.
FusionSeq
JAFFA is based on the idea of comparing a transcriptome against a reference transcriptome rather than a genome-centric approach like other fusion finders.
MapSplice
nFuse
Oncomine NGS RNA-Seq Gene Expression Browser.
PRADA
SOAPFuse detects fusion transcripts from human paired-end RNA-Seq data. It reportedly outperforms five other similar tools in both computational and fusion detection performance using real and simulated data.
SOAPfusion
TopHat-Fusion is based on TopHat and was developed to handle reads resulting from fusion genes. It does not require previous data about known genes and uses Bowtie to align continuous reads.
ViralFusionSeq is a high-throughput sequencing (HTS) tool for discovering viral integration events and reconstructing fusion transcripts at single-base resolution.
ViReMa (Viral Recombination Mapper) detects and reports recombination or fusion events in and between virus and host genomes using deep sequencing datasets.
Copy number variation identification
CNVseq detects copy number variations using a statistical model derived from array-comparative genomic hybridization. Sequence alignments are performed by BLAT, calculations are executed by R modules, and the pipeline is fully automated using Perl. Few other bioinformatics tools can call copy number alterations from RNA-Seq.
Single cell RNA-Seq
Single cell sequencing. The traditional RNA-Seq methodology is commonly known as "bulk RNA-Seq": RNA is extracted from a group of cells or a tissue, not from an individual cell as in single cell methods. Some tools available for bulk RNA-Seq are also applied to single cell analysis; however, to address the specificities of this technique, new algorithms have been developed.
CEL-Seq single-cell RNA-Seq by multiplexed linear amplification.
Drop-Seq Highly Parallel Genome-wide Expression Profiling of Individual Cells Using Nanoliter Droplets.
FISSEQ Single cell transcriptome sequencing in situ, i.e. without dissociating the cells.
Oscope: a statistical pipeline for identifying oscillatory genes in unsynchronized single cell RNA-seq experiments.
SCUBA Extracting lineage relationships and modeling dynamic changes associated with multi-lineage cell differentiation.
scLVM scLVM is a modelling framework for single-cell RNA-seq data that can be used to dissect the observed heterogeneity into different sources, thereby allowing for the correction of confounding sources of variation.
scM&T-Seq Parallel single-cell sequencing.
Sphinx SPHINX is a hybrid binning approach that achieves high binning efficiency by utilizing both 'compositional' and 'similarity' features of the query sequence during the binning process. SPHINX can analyze sequences in metagenomic data sets as rapidly as composition based approaches, but nevertheless has the accuracy and specificity of similarity based algorithms.
TraCeR Paired T-cell receptor reconstruction from single-cell RNA-Seq reads.
VDJPuzzle T-cell receptor reconstruction from single-cell RNA-Seq reads and link the clonotype with the functional phenotype and transcriptome of individual cells.
Integrated Packages
Monocle Differential expression and time-series analysis for single-cell RNA-Seq and qPCR experiments.
SCANPY Scalable Python-based implementation for preprocessing, visualization, clustering, trajectory inference and differential expression testing.
SCell integrated analysis of single-cell RNA-seq data.
Seurat R package designed for QC, analysis, and exploration of single-cell RNA-seq data.
Sincell an R/Bioconductor package for statistical assessment of cell-state hierarchies from single-cell RNA-seq.
SINCERA A Pipeline for Single-Cell RNA-Seq Profiling Analysis.
Quality Control and Gene Filtering
Celloline A pipeline for mapping and quality assessment single cell RNA-seq data.
OEFinder A user interface to identify and visualize ordering effects in single-cell RNA-seq data.
SinQC A Method and Tool to Control Single-cell RNA-seq Data Quality.
Normalization
BASiCS Understanding changes in gene expression at the single-cell level.
GRM Normalization and noise reduction for single cell RNA-seq experiments.
Dimension Reduction
ZIFA Dimensionality reduction for zero-inflated single-cell gene expression analysis.
Differential Expression
BPSC An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq.
MAST a flexible statistical framework for assessing transcriptional changes and characterizing heterogeneity in single-cell RNA sequencing data.
SCDE Characterizing transcriptional heterogeneity through pathway and gene set overdispersion analysis.
Visualization
eXpose
RNA-Seq simulators
These simulators generate in silico reads and are useful tools for comparing and testing the efficiency of algorithms developed to handle RNA-Seq data. Moreover, some of them make it possible to analyse and model RNA-Seq protocols.
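The core of such a simulator can be sketched in a few lines. The Python below draws fixed-length reads from random positions of a toy transcript and introduces substitution errors at a chosen rate; the sequence, read length and error rate are arbitrary illustrative values, and real simulators additionally model fragmentation, coverage bias, quality scores, paired ends and indels.

```python
import random

random.seed(0)

def simulate_reads(transcript, n_reads, read_len=50, error_rate=0.01):
    """Draw uniform random reads from a transcript and add substitution errors."""
    reads = []
    for _ in range(n_reads):
        start = random.randint(0, len(transcript) - read_len)
        read = list(transcript[start:start + read_len])
        for i in range(read_len):
            if random.random() < error_rate:
                read[i] = random.choice([b for b in "ACGT" if b != read[i]])
        reads.append("".join(read))
    return reads

transcript = "".join(random.choice("ACGT") for _ in range(1000))  # toy transcript
print(simulate_reads(transcript, n_reads=3))
```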
BEERS Simulator is designed for mouse or human data and paired-end reads sequenced on the Illumina platform. BEERS generates reads starting from a pool of gene models drawn from different published annotation sources. Some genes are chosen randomly, errors (like indels, base changes and low-quality tails) are then deliberately introduced, followed by construction of novel splice junctions.
compcodeR RNAseq data simulation, differential expression analysis and performance comparison of differential expression methods.
CuReSim a customized read simulator.
Flux simulator implements a computer pipeline simulation to mimic a RNA-Seq experiment. All component steps that influence RNA-Seq are taken into account (reverse transcription, fragmentation, adapter ligation, PCR amplification, gel segregation and sequencing) in the simulation. These steps present experimental attributes that can be measured, and the approximate experimental biases are captured. Flux Simulator allows joining each of these steps as modules to analyse different type of protocols.
PBSIM PacBio reads simulator - toward accurate genome assembly.
Polyester This bioconductor package can be used to simulate RNA-seq reads from differential expression experiments with replicates. The reads can then be aligned and used to perform comparisons of methods for differential expression.
RandomReads Generates synthetic reads from a genome with an Illumina or PacBio error model. The reads may be paired or unpaired, with arbitrary length and insert size, and output in fasta or fastq. RandomReads has a wide selection of options for mutation rates, with individual settings for substitution, deletion, insertion, and N rates and length distributions, annotating reads with their original, unmutated genomic start and stop location. RandomReads does not vary expression levels and thus is not designed to simulate RNA-seq experiments, but rather to test the sensitivity and specificity of RNA-seq aligners with de-novo introns. Includes a tool for grading and generating ROC curves from the resultant sam files. Open-source, written in pure Java; supports all platforms with no recompilation and no other dependencies. Distributed with BBMap.
rlsim is a software package for simulating RNA-seq library preparation with parameter estimation.
rnaseqbenchmark A Benchmark for RNA-seq Quantification Pipelines.
rnaseqcomp Benchmarks for RNA-seq Quantification Pipelines.
RSEM Read Simulator RSEM provides users the ‘rsem-simulate-reads’ program to simulate RNA-Seq data based on parameters learned from real data sets.
RNASeqReadSimulator contains a set of simple, command-line-driven Python scripts. It generates random expression levels of transcripts, simulates single- or paired-end reads with a specified positional bias pattern, and generates random sequencing errors.
RNA Seq Simulator RSS takes SAM alignment files from RNA-Seq data and simulates overdispersed, multiple-replicate, differential, non-stranded RNA-Seq datasets.
SimSeq A Nonparametric Approach to Simulation of RNA-Sequence Datasets.
WGsim Wgsim is a small tool for simulating sequence reads from a reference genome. It is able to simulate diploid genomes with SNPs and insertion/deletion (INDEL) polymorphisms, and simulate reads with uniform substitution sequencing errors. It does not generate INDEL sequencing errors, but this can be partly compensated by simulating INDEL polymorphisms.
Transcriptome assemblers
The transcriptome is the total population of RNAs expressed in one cell or group of cells, including non-coding and protein-coding RNAs.
There are two types of approaches to assemble transcriptomes. Genome-guided methods use a reference genome (if possible a finished, high-quality genome) as a template to align reads and assemble them into transcripts. Genome-independent methods do not require a reference genome and are normally used when a genome is not available; in this case, reads are assembled directly into transcripts.
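Genome-independent assemblers are commonly built around a de Bruijn graph of read k-mers. The sketch below (Python; the reads and k-mer size are invented for illustration) shows only the basic graph construction step; real assemblers add error correction, graph simplification and path traversal to extract transcripts.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k=5):
    """Build a de Bruijn graph: nodes are (k-1)-mers and each observed k-mer
    contributes an edge from its prefix to its suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Toy overlapping reads from the same transcript region (invented for illustration).
reads = ["ATGGCGTGCA", "GCGTGCAATG", "TGCAATGGCA"]
for node, successors in sorted(de_bruijn_graph(reads).items()):
    print(node, "->", successors)
```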
Genome-guided assemblers
Bayesembler Bayesian transcriptome assembly.
CIDANE a comprehensive isoform discovery and abundance estimation.
CLASS CLASS is a program for assembling transcripts from RNA-seq reads aligned to a genome. CLASS produces a set of transcripts in three stages. Stage 1 uses linear programming to determine a set of exons for each gene. Stage 2 builds a splice graph representation of a gene, by connecting the exons (vertices) via introns (edges) extracted from spliced read alignments. Stage 3 selects a subset of the candidate transcripts encoded in the graph that can explain all the reads, using either a parsimonious (SET_COVER) or a dynamic programming optimization approach. This stage takes into account constraints derived from mate pairs and spliced alignments and, optionally, knowledge about gene structure extracted from known annotation or alignments of cDNA sequences.
Cufflinks Cufflinks assembles transcripts, estimates their abundances, and tests for differential expression and regulation in RNA-Seq samples. It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts. Cufflinks then estimates the relative abundances of these transcripts based on how many reads support each one, taking into account biases in library preparation protocols.
iReckon iReckon is an algorithm for the simultaneous isoform reconstruction and abundance estimation. In addition to modelling novel isoforms, multi-mapped reads and read duplicates, this method takes into account the possible presence of unspliced pre-mRNA and intron retention. iReckon only requires a set of transcription start and end sites, but can use known full isoforms to improve sensitivity. Starting from the set of nearly all possible isoforms, iReckon uses a regularized EM algorithm to determine those actually present in the sequenced sample, together with their abundances. iReckon is multi-threaded to increase efficiency in all its time-consuming steps.
IsoInfer IsoInfer is a C/C++ program to infer isoforms based on short RNA-Seq (single-end and paired-end) reads, exon-intron boundary and TSS/PAS information.
IsoLasso IsoLasso is an algorithm to assemble transcripts and estimate their expression levels from RNA-Seq reads.
Flipflop FlipFlop implements a method for de novo transcript discovery and abundance estimation from RNA-Seq data. It differs from Cufflinks by simultaneously performing the identification and quantitation tasks using a convex penalized maximum likelihood approach.
GIIRA GIIRA is a gene prediction method that identifies potential coding regions exclusively based on the mapping of reads from an RNA-Seq experiment. It was foremost designed for prokaryotic gene prediction and is able to resolve genes within the expressed region of an operon. However, it is also applicable to eukaryotes and predicts exon intron structures as well as alternative isoforms.
MITIE Simultaneous RNA-Seq-based Transcript Identification and Quantification in Multiple Samples.
RNAeXpress RNA-eXpress was designed as a user friendly solution to extract and annotate biologically important transcripts from next generation RNA sequencing data. This approach complements existing gene annotation databases by ensuring all transcripts present in the sample are considered for further analysis.
Scripture Scripture is a method for transcriptome reconstruction that relies solely on RNA-Seq reads and an assembled genome to build a transcriptome ab initio. The statistical methods to estimate read coverage significance are also applicable to other sequencing data. Scripture also has modules for ChIP-Seq peak calling.
SLIDE Sparse Linear modeling of RNA-Seq data for Isoform Discovery and abundance Estimation.
Strawberry A program for genome-guided transcripts reconstruction and quantification from paired-end RNA-seq.
StringTie StringTie is an assembler of RNA-Seq alignments into potential transcripts. It uses a novel network flow algorithm as well as an optional de novo assembly step to assemble and quantitate full-length transcripts representing multiple splice variants for each gene locus. Its input can include not only the alignments of raw reads used by other transcript assemblers, but also alignments of longer sequences that have been assembled from those reads. To identify differentially expressed genes between experiments, StringTie's output can be processed by either the Cuffdiff or Ballgown programs.
TransComb a genome-guided transcriptome assembly via combing junctions in splicing graphs.
Traph A tool for transcript identification and quantification with RNA-Seq.
Tiling Assembly for Annotation-independent Novel Gene Discovery.
Genome-independent (de novo) assemblers
Bridger was developed at Shandong University, takes advantage of techniques employed in Cufflinks to overcome limitations of the existing de novo assemblers.
CLC de novo assembly algorithm of CLC Genomics Workbench.
KISSPLICE is a piece of software that enables the analysis of RNA-seq data with or without a reference genome. It is an exact local transcriptome assembler that can identify SNPs, indels and alternative splicing events. It can deal with an arbitrary number of biological conditions, and will quantify each variant in each condition.
Oases De novo transcriptome assembler for very short reads.
rnaSPAdes
Rnnotator an automated de novo transcriptome assembly pipeline from stranded RNA-Seq reads.
SAT-Assembler
SOAPdenovo-Trans
Scaffolding Translation Mapping
Trans-ABySS
T-IDBA
Trinity a method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads.
Velvet
TransLiG
Assembly evaluation tools
Busco provides quantitative measures for the assessment of genome assembly, gene set, and transcriptome completeness, based on evolutionarily informed expectations of gene content from near-universal single-copy orthologs selected from OrthoDB.
Detonate DETONATE (DE novo TranscriptOme rNa-seq Assembly with or without the Truth Evaluation) consists of two component packages, RSEM-EVAL and REF-EVAL. Both packages are mainly intended to be used to evaluate de novo transcriptome assemblies, although REF-EVAL can be used to compare sets of any kinds of genomic sequences.
rnaQUAST Quality Assessment Tool for Transcriptome Assemblies.
TransRate Transrate is software for de novo transcriptome assembly quality analysis. It examines an assembly in detail and compares it to experimental evidence such as the sequencing reads, reporting quality scores for contigs and assemblies. This makes it possible to choose between assemblers and parameters, filter out bad contigs from an assembly, and help decide when to stop trying to improve the assembly.
Co-expression networks
GeneNetWeaver is an open-source tool for in silico benchmark generation and performance profiling of network inference methods.
WGCNA is an R package for weighted correlation network analysis (the underlying adjacency computation is illustrated in the sketch after this list).
Pigengene is an R package that infers biological information from gene expression profiles. Based on a coexpression network, it computes eigengenes and effectively uses them as features to fit decision trees and Bayesian networks that are useful in diagnosis and prognosis.
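The weighted-correlation idea mentioned in the WGCNA entry above can be summarized in a few lines. The sketch below is a Python/NumPy illustration with a randomly generated toy expression matrix and an arbitrary soft-thresholding power; it shows only the adjacency construction, not module detection or any other functionality of these packages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: rows are genes, columns are samples (values generated for illustration).
expression = rng.normal(size=(6, 20))
expression[1] = expression[0] + rng.normal(scale=0.1, size=20)  # make genes 0 and 1 co-expressed

# Weighted co-expression adjacency: absolute Pearson correlation raised to a
# soft-thresholding power beta, so weak correlations are suppressed.
beta = 6
correlation = np.corrcoef(expression)
adjacency = np.abs(correlation) ** beta
np.fill_diagonal(adjacency, 0)

print(np.round(adjacency, 2))   # genes 0 and 1 share a strong edge; the others are near zero
```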
miRNA prediction and analysis
iSRAP a one-touch research tool for rapid profiling of small RNA-seq data.
SPAR small RNA-seq, short total RNA-seq, miRNA-seq, single-cell small RNA-seq data processing, analysis, annotation, visualization, and comparison against reference ENCODE and DASHR datasets.
miRDeep2
MIReNA
miRExpress
miR-PREFeR
miRDeep-P For plants
miRDeep
miRPlant
MiRdup
ShortStack An alignment and annotation suite intended for small RNA analysis in plants, noted for its focus on high-confidence annotations
Visualization tools
ABrowse a customizable next-generation genome browser framework.
Artemis Artemis is a free genome browser and annotation tool that allows visualisation of sequence features, next generation data and the results of analyses within the context of the sequence, and also its six-frame translation.
Apollo Apollo is designed to support geographically dispersed researchers, and the work of a distributed community is coordinated through automatic synchronization: all edits in one client are instantly pushed to all other clients, allowing users to see annotation updates from collaborators in real-time during the editing process.
BamView BamView is a free interactive display of read alignments in BAM data files. It has been developed by the Pathogen Group at the Sanger Institute.
BrowserGenome: web-based RNA-seq data analysis and visualization.
Degust An interactive web tool for visualising Differential Gene Expression data.
DensityMap is a perl tool for the visualization of features density along chromosomes.
EagleView EagleView is an information-rich genome assembly viewer with data integration capability. EagleView can display a dozen different types of information including base qualities, machine-specific trace signals, and genome feature annotations.
expvip-web a customisable RNA-seq data analysis and visualisation platform.
GBrowse
Integrated Genome Browser
Integrative Genomics Viewer (IGV)
GenomeView
MapView
MicroScope comprehensive genome analysis software suite for gene expression heatmaps.
ReadXplorer ReadXplorer is a freely available comprehensive exploration and evaluation tool for NGS data. It extracts and adds quantity and quality measures to each alignment in order to classify the mapped reads. This classification is then taken into account for the different data views and all supported automatic analysis functions.
RNASeqExpressionBrowser is a web-based tool which provides means for the search and visualization of RNA-seq expression data (e.g. based on sequence-information or domain annotations). It can generate detailed reports for selected genes including expression data and associated annotations. If needed, links to (publicly available) databases can be easily integrated. The RNASeqExpressionBrowser allows password protection and thereby access restriction to authorized users only.
Savant Savant is a next-generation genome browser designed for the latest generation of genome data.
Samscope
SeqMonk
Tablet Tablet is a lightweight, high-performance graphical viewer for next generation sequence assemblies and alignments.
Tbrowse- HTML5 Transcriptome Browser
TBro a transcriptome browser for de novo RNA-sequencing experiments.
Vespa
Functional, network and pathway analysis tools
BioCyc Visualize RNA-seq data onto individual pathway diagrams, multi-pathway diagrams called pathway collages, and zoomable organism-specific metabolic map diagrams. Computes pathway enrichment.
BRANE Clust Biologically-Related Apriori Network Enhancement for Gene Regulatory Network Inference combined with clustering.
BRANE Cut Biologically-Related Apriori Network Enhancement with Graph cuts for Gene Regulatory Network Inference.
FunRich Functional Enrichment analysis tool.
GAGE is applicable independent of sample sizes, experimental design, assay platforms, and other types of heterogeneity. This Bioconductor package also provides functions and data for pathway, GO and gene set analysis in general.
Gene Set Association Analysis for RNA-Seq GSAASeq are computational methods that assess the differential expression of a pathway/gene set between two biological states based on sequence count data.
GeneSCF a real-time based functional enrichment tool with support for multiple organisms.
GOexpress Visualise microarray and RNAseq data using gene ontology annotations.
GOSeq Gene Ontology analyser for RNA-seq and other length biased data.
GSAASEQSP A Toolset for Gene Set Association Analysis of RNA-Seq Data.
GSVA gene set variation analysis for microarray and RNA-Seq data.
Heat*Seq an interactive web tool for high-throughput sequencing experiment comparison with public data.
Ingenuity Systems (commercial) iReport & IPA
PathwaySeq Pathway analysis for RNA-Seq data using a score-based approach.
petal Co-expression network modelling in R.
ToPASeq: an R package for topology-based pathway analysis of microarray and RNA-Seq data.
RNA-Enrich A cut-off free functional enrichment testing method for RNA-seq with improved detection power.
TRAPID Rapid Analysis of Transcriptome Data.
T-REx RNA-seq expression analysis.
Further annotation tools for RNA-Seq data
Frama From RNA-seq data to annotated mRNA assemblies.
HLAminer is a computational method for identifying HLA alleles directly from whole genome, exome and transcriptome shotgun sequence datasets. HLA allele predictions are derived by targeted assembly of shotgun sequence data and comparison to a database of reference allele sequences. This tool is developed in Perl and is available as a console tool.
PASA PASA, an acronym for Program to Assemble Spliced Alignments, is a eukaryotic genome annotation tool that exploits spliced alignments of expressed transcript sequences to automatically model gene structures, and to maintain gene structure annotation consistent with the most recently available experimental sequence data. PASA also identifies and classifies all splicing variations supported by the transcript alignments.
seq2HLA is an annotation tool for obtaining an individual's HLA class I and II type and expression using standard NGS RNA-Seq data in fastq format. It comprises mapping RNA-Seq reads against a reference database of HLA alleles using bowtie, determining and reporting HLA type, confidence score and locus-specific expression level. This tool is developed in Python and R. It is available as console tool or Galaxy module.
RNA-Seq databases
ARCHS4 Uniformly processed RNA-seq data from GEO/SRA (>300,000 samples) with metadata search to locate subsets of published samples.
ENA The European Nucleotide Archive (ENA) provides a comprehensive record of the world's nucleotide sequencing information, covering raw sequencing data, sequence assembly information and functional annotation.
ENCODE
queryable-rna-seq-database Formally known as the Queryable RNA-Seq Database, this system is designed to simplify the process of RNA-seq analysis by providing the ability to upload result data from RNA-Seq analysis into a database, store it, and query it in many different ways.
CIRCpedia v2 is an updated comprehensive database containing circRNA annotations from over 180 RNA-seq datasets across six different species. This atlas allows users to search, browse and download circRNAs with expression characteristics/features in various cell types/tissues, including disease samples. In addition, the updated database incorporates conservation analysis of circRNAs between humans and mice.
Human related
Brain RNA-Seq An RNA-Seq transcriptome and splicing database of glia, neurons, and vascular cells of the cerebral cortex.
FusionCancer a database of cancer fusion genes derived from RNA-seq data.
Hipposeq a comprehensive RNA-seq database of gene expression in hippocampal principal neurons.
Mitranscriptome is a systematic list of long poly-adenylated Human RNA transcripts based on RNA-Seq data from more than 6,500 samples associated with a variety of cancer and tissue types. The database contains detailed gene expression analysis of over 91,000 genes, most are uncharacterized long RNAs.
RNA-Seq Atlas a reference database for gene expression profiling in normal tissue by next-generation sequencing.
SRA The Sequence Read Archive (SRA) stores raw sequence data from "next-generation" sequencing technologies including 454, IonTorrent, Illumina, SOLiD, Helicos and Complete Genomics. In addition to raw sequence data, SRA now stores alignment information in the form of read placements on a reference sequence.
DASHR A database of human small RNA genes and mature products derived from small RNA-seq data.
Single species' RNA-Seq databases
Aedes-albopictus Aedes albopictus database.
Arabidopsis thaliana TraVa the database of gene expression profiles in Arabidopsis thaliana based on RNA-seq analysis.
Barley morexGe
EORNA, a barley gene and transcript abundance database (The James Hutton Institute).
Chickpea Chickpea transcriptome database (CTDB) has been developed with the aim of providing the most comprehensive information about the chickpea transcriptome, the most relevant part of the genome.
Chilo suppressalis ChiloDB: a genomic and transcriptome database for an important rice insect pest Chilo suppressalis.
Fruit fly FlyAtlas 2 – Drosophila melanogaster RNA-seq database.
Echinoderm EchinoDB – a repository of orthologous transcripts from echinoderms.
Equine transcriptome (University of California, Davis).
Escherichia coli Ecomics – an omics normalized database for Escherichia coli.
Fish Phylofish.
Ginger Ginger - Ginger transcriptome database.
Lygodium japonicum Lygodium japonicum Transcriptome Database.
Mammals Mammalian Transcriptomic Database.
Oyster (Pacific) GigaTon: an extensive publicly searchable database providing a new reference transcriptome in the pacific oyster Crassostrea gigas.
Mouse and Human PanglaoDB: A gene expression database for exploration and meta-analysis of single cell sequencing data.
Mangrove Mangrove Transcriptome Database.
Krill (Antarctic) KrillDB: a de novo Transcriptome Database for the Antarctic Krill.
Mouse RNASeqMetaDB: a database and web server for navigating metadata of publicly available mouse RNA-Seq data sets.
Rubus Rubus GDR RefTrans V1 - GDR Rubus RefTrans combines published RNA-Seq and EST data sets to create a reference transcriptome (RefTrans) for Rubus and provides putative gene function identified by homology to known proteins.
Sorghum MOROKOSHI Sorghum transcriptome database. RIKEN full-length cDNA clone and RNA-Seq data in Sorghum bicolor.
S. purpuratus S. purpuratus - Developmental Transcriptomes of S. purpuratus
S. cerevisiae YeastMine transcriptome database.
Wheat WheatExp – an RNA-seq expression database for polyploid wheat.
References
External links
RNASeq-Blog Presentations RNA-Seq Workshop Documentation (UC Davis University) Princeton Workshop YouTube/RNA-Seq RNA-Seq Presentations from GSK, University of Torino and University of Bath.
RNA seq
RNA-Seq Bioinformatic |
3758144 | https://en.wikipedia.org/wiki/CDC%201604 | CDC 1604 | The CDC 1604 was a 48-bit computer designed and manufactured by Seymour Cray and his team at the Control Data Corporation (CDC). The 1604 is known as one of the first commercially successful transistorized computers. (The IBM 7090 was delivered earlier, in November 1959.) Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-UNIVAC 1103.
A cut-down 24-bit version, designated the CDC 924, was shortly thereafter produced, and delivered to NASA.
The first 1604 was delivered to the U.S. Navy Post Graduate School in January 1960 for applications supporting major Fleet Operations Control Centers primarily for weather prediction in Hawaii, London, and Norfolk, Virginia. By 1964, over 50 systems were built. The CDC 3600, which added five op codes, succeeded the 1604, and "was largely compatible" with it.
One of the 1604s was shipped to the Pentagon to DASA (Defense Atomic Support Agency) and used during the Cuban Missile Crisis to predict possible strikes by the Soviet Union against the United States.
A 12-bit minicomputer, called the CDC 160, was often used as an I/O processor in 1604 systems. A stand-alone version of the 160 called the CDC 160-A was arguably the first minicomputer.
Architecture
Memory in the CDC 1604 consisted of 32K 48-bit words of magnetic core memory with a cycle time of 6.4 microseconds. It was organized as two banks of 16K words each, with odd addresses in one bank and even addresses in the other. The two banks were phased 3.2 microseconds apart, so average effective memory access time was 4.8 microseconds. The computer executed about 100,000 operations per second.
Each 48-bit word contained two 24-bit instructions. The instruction format was 6-3-15: six bits for the operation code, three bits for a "designator" (index register for memory access instructions, condition for jump (branch) instructions) and fifteen bits for a memory address (or shift count, for shift instructions).
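As a rough illustration of this 6-3-15 packing, the following C sketch (hypothetical names and an arbitrary example word, not CDC software) splits a 48-bit word into its two 24-bit instructions and extracts the fields:
#include <stdint.h>
#include <stdio.h>
struct cdc1604_insn {
    unsigned op;         /* 6-bit operation code                   */
    unsigned designator; /* 3-bit index register or jump condition */
    unsigned address;    /* 15-bit address or shift count          */
};
/* Decode one 24-bit instruction half-word laid out as 6-3-15 bits. */
static struct cdc1604_insn decode24(uint32_t half)
{
    struct cdc1604_insn i;
    i.op         = (half >> 18) & 077;    /* top 6 bits  */
    i.designator = (half >> 15) & 07;     /* next 3 bits */
    i.address    = half & 077777;         /* low 15 bits */
    return i;
}
int main(void)
{
    uint64_t word = 0x123456789ABCULL & 0xFFFFFFFFFFFFULL; /* arbitrary 48-bit example */
    struct cdc1604_insn upper = decode24((uint32_t)((word >> 24) & 0xFFFFFF));
    struct cdc1604_insn lower = decode24((uint32_t)(word & 0xFFFFFF));
    printf("upper: op=%02o designator=%o address=%05o\n", upper.op, upper.designator, upper.address);
    printf("lower: op=%02o designator=%o address=%05o\n", lower.op, lower.designator, lower.address);
    return 0;
}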
The CPU contained a 48-bit accumulator (A), a 48-bit Auxiliary Arithmetic register (Q), a 15-bit program counter (P), and six 15-bit index registers (1-6). The Q register was usually used in conjunction with A for forming a double-length register AQ or QA, participating with A in multiplication, division and logical product (masking) operations, and temporary storage of A's contents while using A for another operation.
Internal integer representation used ones' complement arithmetic. Internal floating point format was 1-11-36: one bit of sign, eleven bits of offset (biased) binary exponent, and thirty-six bits of binary significand.
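In ones' complement a value is negated by inverting every bit, which also gives the word two representations of zero (all zeros and all ones). A minimal C sketch of that convention for a 48-bit word (an illustration of the arithmetic, not CDC code):
#include <stdint.h>
#define WORD_MASK 0xFFFFFFFFFFFFULL   /* 48 bits */
#define SIGN_BIT  (1ULL << 47)
/* Negate a 48-bit ones'-complement value: invert every bit of the word. */
static uint64_t ones_negate(uint64_t w)
{
    return (~w) & WORD_MASK;
}
/* Interpret a 48-bit ones'-complement word as a signed integer. */
static int64_t ones_value(uint64_t w)
{
    if (w & SIGN_BIT)                          /* negative: -(bitwise complement) */
        return -(int64_t)((~w) & WORD_MASK);
    return (int64_t)w;
}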
The most-significant three bits of the accumulator were converted from digital to analog and connected to a tube audio amplifier contained in the console. This facility could be used to program audio alerts for the computer operator, or to generate music. Those familiar with the inner workings of the software could often hear what parts of a task were being performed by the CDC 1604; as a debugging aid, for example, a never-ending repetitive musical phrase indicated the program was stuck in a loop.
Uses and applications
In 1960, one of the first text-mining applications, Masquerade, was written for the Marathon Oil Company in Findlay, Ohio. Masquerade was a text-mining program that used syntactic structures underlying text data to mask out words and phrases for searching purposes.
During 1969, Fleet Operations Control Center, Pacific (FOCCPAC at Kunia) on Oahu in Hawaii launched an Automated Control Environment (ACE) using a cluster of five CDC 160As to supervise a multi-tasking network of four CDC 1604s.
The Minuteman I was the first U.S. solid-rocket ICBM system to be fielded. There were two entirely separate ground station designs which were developed independently. The smaller, more elegant, single silo design incorporated two redundant CDC 1604 computer systems, each equipped with dual cabinets containing four 200 bpi magnetic tape drives. The computers were used to pre-compute guidance and aiming control information. Results based on current weather and targeting information were downloaded into the missile prior to launch. Model displays of both of these ICBM ground station designs, including block models of the CDC 1604 computers, may be viewed at the Octave Chanute Aerospace Museum in Rantoul, Illinois.
The third version of the PLATO computer-based educational system was implemented on a CDC 1604-C.
JOVIAL was used as the main programming language of the CDC 1604, while octal was used to program shared services supported by the CDC 160A. NAVCOSSACT based at the Washington Navy Yard provided systems and training support.
The CDC 1604 was used to compose Sailboat and other artworks by Sam Schmitt and Stockton Gaines.
Similar machines
The 1604 design was used by the Soviet nuclear weapons laboratory. Their BESM-6 computer, which entered production in 1968, was designed to be somewhat software compatible with the CDC 1604, but it ran 10 times faster and had additional registers.
The 924
The CDC 924 was a 24-bit computer that supported the use of "any input-output devices capable of communicating with the 160 and/or 1604 computer," and its six independent channels permitted 3 simultaneous input operations even as 3 channels concurrently performed output.
Like many CDC processors, it used ones' complement arithmetic.
Some advanced features of the 924, which included 64 instructions, were:
Six index registers. The value "7" was reserved to indicate indirect-addressing.
an execute instruction (in what the hardware reference manual called "a subroutine of a single instruction").
powerful Storage Search instructions.
References
External links
Neil R. Lincoln with 18 Control Data Corporation (CDC) engineers on computer architecture and design, Charles Babbage Institute, University of Minnesota. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, CDC 8600, CDC STAR-100 and Seymour Cray.
On-line copies of CDC 1604 manuals.
Further reading
Addressability strengths of 24 and 48 bit designs
Photos
CDC 1604
1604 in Cray Computer Museum
same museum's 1604, different view
1604
Control Data mainframe computers
Transistorized computers
48-bit computers
24-bit computers |
43134035 | https://en.wikipedia.org/wiki/Maps.me | Maps.me | Maps.me (styled as MAPS.ME) is a mobile app for Android, iOS and BlackBerry that provides offline maps using OpenStreetMap data. It was formerly known as MapsWithMe. In November 2014, it was acquired by Mail.Ru Group and became part of its My.com brand. In September 2015, the app was open sourced, and a free and open-source software version was additionally made available on F-Droid until the application was sold to the payment processor Daegu Limited, part of Parity.com, which changed the application's user interface and content; this led the free software community to develop an ad- and tracker-free fork in response.
History
The application was initially developed by a team based in Belarus and Switzerland. Maps.me was founded by Yury Melnichek, Alexander Borsuk, Viktor Govako and Siarhei Rachytski. Under Borsuk's leadership, MapsWithMe gained its first 2.5 million users worldwide. Melnichek led the project from November 2013 until April 2016, when Evgeny Lisovskiy took over. In early 2017 the philosophy of the app changed, leading it to be supported by adverts.
In November 2020, Mail.ru Group sold Maps.me to the payment processor Daegu Limited, part of Parity.com, which changed the application's user interface and content; this led the free software community to develop OrganicMaps in response.
MapsWithMe GmbH
The application was initially developed by Zurich-based MapsWithMe GmbH with a development office in Minsk.
In 2012, MapsWithMe came in first in the Startup Monthly competition in Vilnius. The team won a nine-week traineeship in Silicon Valley as a prize.
Mail.ru Group
In November 2014, Maps.me was acquired by Mail.Ru Group for 542 million rubles (around US$14 million at that time) to be integrated with My.com, and the app was made free of charge. The engineering team was relocated to the Mail.Ru Group office in Moscow to continue working on the project.
In 2019, its revenue amounted to 159 million rubles (US$2.5 million) with an EBITDA loss of 25 million rubles (US$0.39 million).
Daegu Ltd and partners
On November 2, 2020, Daegu Limited bought Maps.me for 1.56 billion Russian rubles (some US$20 million at 2020's exchange rate). Daegu Limited was announced to be part of the Parity.com Group.
As to who is ultimately responsible for the application, two main groups are involved.
The first is Parity.com AG, which is part of Convexity Holdings AG of Switzerland.
The second is the global TMF Group, a partner of the Parity.com Group, which maintains an office in Cyprus.
In the Android and Apple app stores, the application is listed under Maps.Me (Cyprus) Limited and Stolmo Limited; according to the maps.me policy, both fall under the responsibility of the TMF Group at the same office in Nicosia.
Public searches give no clear indication of what Daegu Limited is, or whether Parity.com AG is the same entity as the Parity.com Group.
In summary, five companies are currently involved: Daegu Ltd, Parity.com AG/Group, TMF Group, Stolmo Ltd, and Maps.me (Cyprus) Ltd.
See also
Comparison of commercial GPS software
References
External links
Free and open-source Android software
IOS software
Mobile route-planning software
Satellite navigation software
OpenStreetMap |
19043895 | https://en.wikipedia.org/wiki/TTEthernet | TTEthernet | The Time-Triggered Ethernet (SAE AS6802) (also known as TTEthernet or TTE) standard defines a fault-tolerant synchronization strategy for building and maintaining synchronized time in Ethernet networks, and outlines mechanisms required for synchronous time-triggered packet switching for critical integrated applications, IMA and integrated modular architectures. SAE International has released SAE AS6802 in November 2011.
Time-Triggered Ethernet network devices are Ethernet devices which at least implement:
SAE AS6802 synchronization services for advanced integrated architectures, fail-operational and safety-critical systems
time-triggered traffic flow control with traffic scheduling
per-flow policing of packet timing for time-triggered traffic
robust internal architecture with traffic partitioning
TTEthernet network devices are standard Ethernet devices with the additional capability to configure and establish robust synchronization, synchronous packet switching, traffic scheduling and bandwidth partitioning, as described in SAE AS6802. If no time-triggered traffic capability is configured or used, they operate as full-duplex switched Ethernet devices compliant with the IEEE 802.3 and IEEE 802.1 standards.
In addition, such network devices implement other deterministic traffic classes to enable mixed-criticality Ethernet networking. Therefore, TTEthernet networks are designed to host different Ethernet traffic classes without interference.
TTEthernet device implementation expands standard Ethernet with services to meet time-critical, deterministic or safety-relevant requirements in double- and triple-redundant configurations for advanced integrated systems. TTEthernet switching devices are used for integrated systems and safety-related applications primarily in the aerospace, industrial controls and automotive applications.
TTEthernet has been selected by NASA and ESA as the technology for communications between the Orion MPCV and the European Service Module, and is described by the ESA as being "prime choice for future launchers allowing them to deploy distributed modular avionics concepts".
Description
TTEthernet network devices implement OSI Layer 2 services and are therefore claimed to be compatible with IEEE 802.3 standards and to coexist with other Ethernet networks and services or traffic classes, such as IEEE 802.1Q, on the same device.
Synchronization messages and three traffic classes are provided in current TTEthernet switch implementations:
Synchronization Traffic (Protocol Control Frames - PCF): Time-Triggered Ethernet network uses protocol control frames (PCFs) to establish and maintain synchronization. The PCFs traffic has the highest priority and it is similar to rate-constrained traffic. PCF traffic establishes a well-defined interface for fault-tolerant clock synchronization algorithms.
Time-triggered traffic: Ethernet packets are sent over the network at predefined (scheduled) times and take precedence over all other traffic types. The occurrence, temporal delay and precision of time-triggered messages are predefined and guaranteed. Also, "synchronized local clocks are the fundamental prerequisite for time-triggered communication".
Rate-constrained traffic: Ethernet packets are configured so that they can keep maximum latency and jitter in a closed system. They are used for applications with less stringent determinism and real-time requirements. This traffic class guarantees that bandwidth is predefined for each application and delays and temporal deviations have defined upper bounds.
Best-effort traffic (incl. VLAN traffic): Packets are sent via FIFO queues to egress ports. There is no absolute guarantee whether and when these messages can be transmitted, what delays occur and if messages arrive at the recipient. Best-effort messages use the remaining bandwidth of the network and have lower priority than the other two types.
Three traffic classes cover different types of determinism - from soft-time best-effort traffic to "more deterministic" to "very deterministic" (maximum latency defined per VL) to "strictly deterministic" (fixed latency, µs-jitter), thus creating a deterministic unified Ethernet networking technology. While standard full duplex switched Ethernet is typically best effort or more deterministic, time-triggered traffic is bound only to the system time progression and traffic scheduling, and not to priorities. It can be considered the highest priority traffic, above the highest priority 802.1Q VLAN traffic.
Fault-tolerance
TTEthernet (i.e. an Ethernet switch with SAE AS6802) integrates a model of fault-tolerance and failure management. A TTEthernet switch can implement reliable redundancy management and dataflow (datastream) integration to assure message transmission even in the case of a switch failure. SAE AS6802 implemented on an Ethernet switch supports the design of synchronous system architectures with a defined fault-hypothesis.
The single-failure hypothesis, dual-failure hypothesis, and tolerance against arbitrary synchronization disturbances define the basic fault-tolerance concept in a Time-Triggered Ethernet (SAE AS6802-based) network.
Under the single-failure hypothesis, Time-Triggered Ethernet (SAE AS6802) is intended to tolerate either the fail-arbitrary failure of an end system or the fail-inconsistent-omission failure of a switch. The switches in Time-Triggered Ethernet network can be configured to execute a central bus guardian function. The central bus guardian function ensures that even if a set of end systems becomes arbitrarily faulty, it masks the system-wide impact of these faulty end systems by transforming the fail-arbitrary failure mode into an inconsistent-omission failure mode. The arbitrarily faulty failure mode also includes so called "babbling-idiot" behavior. Time-Triggered Ethernet switches therefore establish fault-containment boundaries.
Under the dual-failure hypothesis, Time-Triggered Ethernet networks are intended to tolerate two fail-inconsistent-omission faulty devices. These devices may be two end systems, two switches, or an end system and a switch. The last failure scenario (i.e., end system and switch failure) means that Time-Triggered Ethernet network tolerates an inconsistent communication path between end systems. This failure mode is one of the most difficult to overcome.
Time-Triggered Ethernet networks are intended to tolerate transient synchronization disturbances, even in the presence of permanent failures. Under both single- and dual-failure hypothesis, Time-Triggered Ethernet provides self-stabilization properties. Self-stabilization means that synchronization can reestablish itself, even after a transient upset in a multitude of devices in the distributed computer network.
Performance
Time-Triggered Traffic
Time-triggered traffic is scheduled periodically, and depending on the architecture, line speed (e.g. 1GbE), topology and computing model with control loops operating at 0.1-5(+) kHz, using a time-triggered architecture (TTA) model of computation and communication. Hard real-time is possible at application level due to strict determinism, jitter control and alignment/synchronization between tasks and scheduled network messaging.
In L-TTA (loosely TTA) architectures with a synchronous TTEthernet network, but with local computer clocks decoupled from the system/network time, the performance of control loops may be limited. In this case, time-triggered transmissions are necessarily cyclically scheduled, and thus delays between processes in the application layer can be large, up to twice the transmission interval, because released packets wait for their scheduled transmission at the source and for the receiving process to run at the destination. This occurs, for example, with plesiochronous processes operating on their own local clock and execution cycle, as is observed in systems using cyclic MIL-STD-1553B buses.
Rate-Constrained Traffic
Rate-constrained traffic is another periodic time-sensitive traffic class, and it shall be modeled to align with time-triggered traffic (and vice versa) in order to fulfill maximum latency and jitter requirements. However, even where the sum of the allocated bandwidths is less than the capacity available at every point in the network, delivery is still not guaranteed, owing, for example, to potential buffer overflows at switch queues, which a simple limitation of bandwidth does not guarantee to avoid.
Best Effort Traffic
Best effort traffic will utilize network bandwidth not used by rate-constrained and time-triggered traffic.
In TTEthernet devices, this traffic class cannot interfere with deterministic traffic, as it resides in its own separate buffer memory. Moreover, the device implements an internal architecture which isolates best-effort traffic on partitioned ports from the traffic assigned to other ports. This mechanism can be combined with fine-grained IP traffic policing to enable traffic control which is much more robust than VLANs with FIFO buffering.
History
In 2008, it was announced Honeywell would apply the technology to applications in the aerospace and automation industry.
In 2010 a switch-based implementation was shown to perform better than shared bus systems such as FlexRay for use in automobiles. Since then, Time-Triggered Ethernet has been implemented in different industrial, space and automotive programs and components.
See also
Computer networking
Computer science
Time-Triggered Protocol
Real-time computing
Notes
References
External links
AS6802: Time-Triggered Ethernet
www.tttech.com/ttethernet - TTTech Computertechnik AG
realtime-ethernet.de - Comparison of realtime-ethernet solutions Explanations partly German, partly English
NASA and TTTech partner on space network standards for network centric space operations Military & Aerospace Electronics magazine on TTEthernet
Industrial Ethernet |
2233706 | https://en.wikipedia.org/wiki/MOPAC | MOPAC | MOPAC is a popular computer program used in computational chemistry. It is designed to implement semi-empirical quantum chemistry algorithms, and it runs on Windows, Mac, and Linux.
MOPAC2016 is the current version. MOPAC2016 is able to perform calculations on small molecules and enzymes using PM7, PM6, PM3, AM1, MNDO, and RM1. The Sparkle model (for lanthanide chemistry) is also available. Academic users can use this program for free, whereas government and commercial users must purchase the software.
MOPAC was largely written by Michael Dewar's research group at the University of Texas at Austin. Its name is derived from Molecular Orbital PACkage, and it is also a pun on the Mopac Expressway that runs around Austin.
MOPAC2007 included the new Sparkle/AM1, Sparkle/PM3, RM1 and PM6 models, with an increased emphasis on solid-state capabilities. However, it does not yet include MINDO/3, PM5, analytical derivatives, the Tomasi solvation model or intersystem crossing. MOPAC2007 was followed by the release of MOPAC2009 in 2008, which introduced many improved features.
The latest versions are no longer public domain software as were the earlier versions such as MOPAC6 and MOPAC7. However, there are recent efforts to keep MOPAC7 working as open source software. An open source version of MOPAC7 for Linux is also available. The author of MOPAC, James Stewart, released in 2006 a public domain version of MOPAC7 entirely written in Fortran 90 called MOPAC7.1.
See also
Semi-empirical quantum chemistry methods
AMPAC
Quantum chemistry computer programs
References
External links
MOPAC 2016 sales and support information
MOPAC 2002 Manual
MOPAC 2009 Manual
Source code and compiled binaries at the Computational Chemistry List repository:
Source code (in FORTRAN):
MOPAC 6
MOPAC 7
Compiled binaries:
MOPAC 6 for MS-DOS/Windows;
MOPAC 6 for Windows 95/NT;
MOPAC 6 with GUI (Winmostar)
MOPAC 7 for MS-DOS/Windows
MOPAC 7 for Linux
MOPAC 2009 for Linux Windows and Mac
MOPAC-5.022mn (MOPAC at the University of Minnesota)
Computational chemistry software |
2384992 | https://en.wikipedia.org/wiki/Final%20Scratch | Final Scratch | Final Scratch is a DJ tool created by the Dutch company N2IT with input from Richie Hawtin (aka Plastikman) and John Acquaviva that allows manipulation and playback of digital audio sources using traditional vinyl and turntables. It seeks to cross the divide between the versatility of digital audio and the tactile control of vinyl turntablism.
Final Scratch uses special vinyl records pressed with a digital timecode, which are then played on normal turntables. The timecode signal is interpreted by a computer, connected to the turntables through an interface called the ScratchAmp. The signal represents where the stylus is on the record, in which direction it is traveling, and at what speed. This information is interpreted by the computer and used to play back a digital audio file which has been 'mapped' to the turntable. In practical terms, this means that any audio file can be manipulated as though it was pressed on vinyl.
Features
Final Scratch offers the ability to play audio tracks unavailable on vinyl e.g. pre-arranged loops, unreleased music or rare tracks. Furthermore, it allows the use of CD deck features (software permitting) such as keylock, pitch shift, looping, instant cue locating and visual indicators of audio features such as loud or quiet parts, and the ability to prevent needle skips on the vinyl being reflected in the playback of the audio track being played/controlled (software permitting). However, it comes at the expense of reliability; depending on the hardware/software configuration used, vinyl emulation systems may use more system resources than some laptops or PCs offer, making them unsuitable for this use.
History
The original Final Scratch concept and prototypes were developed by the Dutch company N2IT V.O.F, by Mark-Jan Bastian, with help from Tim Hemel and Bill Squire.
It has passed through multiple stages of development. These stages are marked by involvement with different companies, hardware configurations, software developers, licensees and licensors, and operating systems.
Pre-release
Final Scratch was originally developed for BeOS.
Versions 1.0-1.5
All versions of Final Scratch 1 use the same Scratchamp, a USB and RCA device in a round aluminium shell. The technical specifications of this device have been closely guarded by Stanton as an anti-piracy measure, though some users, unsatisfied with the latency and instability of the system, have alleged the use of faulty Philips sound chips which had already been withdrawn from the market. However, the same chipset was being used in several other USB audio devices manufactured by companies like Griffin and Roland at that time.
FS 1.0 was released for PC only, on a specially modified distribution of Debian Linux. It was relatively primitive but some users found that, if configured correctly, it outperformed all subsequent versions of Final Scratch 1.x.
With version 1.1, Stanton Magnetics began working with Native Instruments on the software side of the product, which became Traktor Final Scratch. As the name suggests, this bore a resemblance to the interface of Traktor, a Native Instruments software DJing product. This version was once again available on Linux, but was also ported to Mac OS X.
The next major revision was version 1.5, which added a Windows XP version, but dropped Linux support. This version also added the ability to keep the pitch of the record constant whilst shifting the tempo. The interface changed very little, but some users initially had issues with the Windows Scratchamp drivers.
Support for the original Scratchamp has since all but disappeared, and current owners, disappointed by the lack of support from Stanton, have had to rely on old versions of Traktor FS or Digiscratch.
Version 2
Version 2 marks the introduction of both a new Scratchamp hardware device and different software compatibility.
This new Scratchamp made 24-bit/96 kHz digital quality playback and record possible. Stanton added an ASIO driver, and MIDI capabilities. They also replaced the USB interface with FireWire which was intended to reduce playback latency. The new Scratchamp was developed by Alan Flum, Len Bryan, Mark DeMouy and Jim Mazur.
The version 2 Scratchamp is compatible with Native Instruments Traktor DJ Studio versions 2.6 through 3.2.0.85 (Mac). NI has dropped support of SA2 in favor of their own vinyl system Traktor Scratch.
Final Scratch Open
In late 2005, Stanton and Native Instruments ended their working relationship. Stanton still markets the ScratchAmp hardware as part of Final Scratch Open, introduced in early 2007. Stanton claims that the ScratchAmp can now interact with any audio software through ASIO or WDM on Windows, and CoreAudio in Mac OS X. Although all Windows and Mac audio software is ostensibly compatible with Final Scratch Open, there is no dedicated software program for deejaying with the ScratchAmp hardware.
Internal workings
The internal workings of Final Scratch are quite simple to understand. Multiple open source software libraries have been created to decode the Final Scratch time code. The information here comes from those libraries.
A basic Final Scratch setup consists of five pieces of equipment.
A computer running compatible software, usually Native Instruments' Traktor
The ScratchAmp
Two turntables or two CD decks made for DJing
Two time coded vinyl records or time coded CDs
An audio DJ mixer.
ScratchAmp
The ScratchAmp is a FireWire (FS 2, FS Open) or USB (FS 1) audio device. It has two phono/line stereo level inputs to read the timecode from the record or the CD, and two line level stereo outputs to feed into the audio DJ mixer line channels. It also has two phono stereo outputs for pass-through of the actual phono audio signal. This is useful for DJs who wish to play both digital audio tracks AND traditional vinyl; allowing them to switch between the two sources without disconnecting or re-connecting audio jacks in the middle of a DJ set.
The ScratchAmp does not store any audio on its own, it is simply a purpose built external Soundcard. It communicates with a PC—usually a laptop—over the FireWire or USB connection. The laptop uses Final Scratch compatible software (typically Traktor DJ Studio) to interpret the timecode signal from the supplied special vinyl/CD, then play back a digital audio file based on that signal, allowing traditional DJ vinyl control of MP3, WAV and Apple AAC audio files. The Laptop software then sends audio data back, over the same FireWire/USB connection to the scratch amp, which then sends an audio signal out through the line level output, for playing through a DJ Mixer or Amp.
Audio/data routing
A step-by-step series of events detailing how Final Scratch operates:
Timecoded audio signal pressed onto vinyl/CD picked up by vinyl/CD turntable
Signal routed into ScratchAmp via phono connection, then into the PC via USB or FireWire
DJ software decodes timecode signal and determines position, speed and direction the Vinyl/CD is being played or manipulated
DJ software plays the selected "mapped" digital audio file synchronous to the vinyl/CD playback
Digital audio file audio signal is sent to the Scratchamp phono connectors for connection to a DJ mixer or amp
Vinyl/CD time code
The most complex piece of the Final Scratch setup is the code pressed onto the vinyl. A 1200 hertz amplitude-modulated sine wave is pressed into the left and right channels with a phase difference of 90 degrees. Each channel holds one of the two bit streams required for the time code. In one cycle of either waveform, two bits are stored: one on the positive voltage peak and one on the negative voltage valley. The relative amplitudes of these peaks represent either a binary one or a zero: a relatively high amplitude on either peak represents a one, and a relatively low amplitude represents a zero. The two channels carry separate bitstreams; the left channel is not identical to the right (disregarding the phase difference).
The time codes themselves consist of 40 individual bits, or 20 cycles on each channel's waveform. On the right channel the bit sequence of 0, 0, 0, 1 represents the start sequence for a single time code. Those four bits, along with the four corresponding bits on the left channel and the next 16 bits on each channel, can be decoded as an integer position value which represents where the needle is on the record. The speed at which the record is spinning can be found by comparing the frequency of the waveform being read from the record to the true frequency of the waveform on the record at normal speed. This difference represents the change from the normal speed at which the record turns. The direction in which the record is spinning at any given time can be found using the phase difference between the waves on the two channels. This procedure is the same as that used to determine the direction in which a ball mouse is moving. Because a single time code is made up of 40 consecutive bits, read errors can make a time code unreadable if even a single bit is misread. A bit that has become unreadable due to a scratch can make an entire 40-bit-long time code permanently unreadable. Dust can have a similar effect on the time code. The time code implements very little error checking, a feature that is stronger in a number of other vinyl control systems.
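A highly simplified C sketch of the decoding idea described above (the peak-detection front end, the exact bit ordering and all names are illustrative assumptions; the actual Final Scratch bit layout is not reproduced here):
#include <stdint.h>
/* Sliding windows of the 20 most recent bits recovered from each channel.  */
/* One bit is produced per waveform peak; detecting the peaks and comparing */
/* their relative amplitudes against a threshold is omitted here.           */
struct timecode_state {
    uint32_t left;
    uint32_t right;
};
/* Push one new bit per channel. Returns 1 and fills *position when the     */
/* oldest four right-channel bits in the window are the 0,0,0,1 start code. */
static int timecode_push(struct timecode_state *s,
                         int left_bit, int right_bit, uint64_t *position)
{
    s->left  = ((s->left  << 1) | (left_bit  & 1)) & 0xFFFFF;  /* keep 20 bits */
    s->right = ((s->right << 1) | (right_bit & 1)) & 0xFFFFF;
    if (((s->right >> 16) & 0xF) != 0x1)
        return 0;
    /* Combine the two 20-bit streams into a 40-bit position value.          */
    /* The real encoding of the position integer is a placeholder here.      */
    *position = ((uint64_t)s->right << 20) | s->left;
    return 1;
}
Speed would then be derived from the measured waveform frequency relative to the nominal 1200 Hz, and direction from the sign of the phase difference between the two channels, as described above.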
See also
Vinyl emulation
References
External links
Native Instruments website
Stanton Website
Video by Mark-Jan Bastian, John Acquaviva demonstrating FinalScratch to Carl Cox @ RAI Amsterdam
Audio mixing software |
30776013 | https://en.wikipedia.org/wiki/Battle%20for%20the%20Ol%27%20School%20Bell | Battle for the Ol' School Bell | The Battle for the Ol' School Bell was an old rivalry between the Troy State Trojans (now the Troy Trojans) and the Jacksonville State Gamecocks, dating from when the two schools played each other in Division II. The series continued as the Trojans moved to the FCS, with the Gamecocks moving to the FCS soon after. The series came to a halt when Troy moved to what is now the FBS. However, with rumors of Jacksonville State considering a possible move to the FBS, the rivalry may be renewed. The idea for a school bell trophy stemmed from the two schools' common origins as teachers' colleges.
History
The two teams first met in 1924 in Jacksonville, Alabama. The last game was played in 2001. Jacksonville State leads the series 32–29–2. Troy has won the last seven games of the series, while also going 12–3 since 1983 against the Gamecocks.
Game results
See also
List of NCAA college football rivalry games
References
College football rivalries in the United States
Jacksonville State Gamecocks football
Troy Trojans football
1924 establishments in Alabama |
1394971 | https://en.wikipedia.org/wiki/File%20%28command%29 | File (command) | The file command is a standard program of Unix and Unix-like operating systems for recognizing the type of data contained in a computer file.
History
The original version of file originated in Unix Research Version 4 in 1973. System V brought a major update with several important changes, most notably moving the file type information into an external text file rather than compiling it into the binary itself.
Most major BSD and Linux distributions use a free, open-source reimplementation which was written in 1986–87 by Ian Darwin from scratch. It was expanded by Geoff Collyer in 1989 and since then has had input from many others, including Guy Harris, Chris Lowth and Eric Fischer; from late 1993 onward its maintenance has been organized by Christos Zoulas. The OpenBSD system has its own subset implementation written from scratch, but still uses the Darwin/Zoulas collection of magic file formatted information.
The command has also been ported to the IBM i operating system.
Specification
The Single Unix Specification (SUS) specifies that a series of tests are performed on the file specified on the command line:
if the file cannot be read, or its Unix file type is undetermined, the file program will indicate that the file was processed but its type was undetermined.
file must be able to determine the types directory, FIFO, socket, block special file, and character special file
zero-length files are identified as such
an initial part of file is considered and file is to use position-sensitive tests
the entire file is considered and file is to use context-sensitive tests
the file is identified as a data file
file's position-sensitive tests are normally implemented by matching various locations within the file against a textual database of magic numbers (see the Usage section). This differs from other simpler methods such as file extensions and schemes like MIME.
In most implementations, the file command uses a database to drive the probing of the lead bytes. That database is implemented in a file called magic, whose location is usually in /etc/magic, /usr/share/file/magic or a similar location.
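The exact syntax is documented in magic(5); each entry gives an offset, a value type, a value to test for and a message to print, with ">"-prefixed continuation lines refining the match. An illustrative fragment (not necessarily the exact entries shipped with any particular system) might look like:
# offset  type    test   message
0         string  GIF8   GIF image data
>4        string  7a     \b, version 87a
>4        string  9a     \b, version 89a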
Usage
The SUS mandates the following options:
-M file, specify a file specially formatted containing position-sensitive tests; default position-sensitive tests and context-sensitive tests will not be performed.
-m file, as for -M, but default tests will be performed after the tests contained in file.
-d, perform default position-sensitive and context-sensitive tests to the given file; this is the default behaviour unless -M or -m is specified.
-h, do not dereference symbolic links that point to an existing file or directory.
-L, dereference the symbolic link that points to an existing file or directory.
-i, do not classify the file further than to identify it as either: nonexistent, a block special file, a character special file, a directory, a FIFO, a socket, a symbolic link, or a regular file. Linux and BSD systems behave differently with this option and instead output an Internet media type ("MIME type") identifying the recognized file format.
Other Unix and Unix-like operating systems may add extra options than these, such as -s 'special files', -k 'keep-going' or -r 'raw' (examples below).
The command tells only what the file looks like, not what it is (in the case where file looks at the content). It is easy to fool the program by putting a magic number into a file the content of which does not match it. Thus the command is not usable as a security tool other than in specific situations.
Examples
$ file file.c
file.c: C program text
$ file program
program: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked
(uses shared libs), stripped
$ file /dev/hda1
/dev/hda1: block special (0/0)
$ file -s /dev/hda1
/dev/hda1: Linux/i386 ext2 filesystem
Note that -s is a non-standard option available only on some platforms, which tells file to read device files and try to identify their contents rather than merely identifying them as device files. Normally file does not try to read device files since reading such a file can have undesirable side effects.
$ file -k -r libmagic-dev_5.35-4_armhf.deb # (on Linux)
libmagic-dev_5.35-4_armhf.deb: Debian binary package (format 2.0)
- current ar archive
- data
Through the non-standard option -k the program does not stop after the first hit found, but looks for other matching patterns.
The -r option, which is available in some versions, causes the unprintable new line character to be displayed in its raw form rather than in its octal representation.
$ file compressed.gz
compressed.gz: gzip compressed data, deflated, original filename, `compressed', last
modified: Thu Jan 26 14:08:23 2006, os: Unix
$ file -i compressed.gz # (on Linux)
compressed.gz: application/x-gzip; charset=binary
$ file data.ppm
data.ppm: Netpbm PPM "rawbits" image data
$ file /bin/cat
/bin/cat: Mach-O universal binary with 2 architectures
/bin/cat (for architecture ppc7400): Mach-O executable ppc
/bin/cat (for architecture i386): Mach-O executable i386
$ file /usr/bin/vi
/usr/bin/vi: symbolic link to vim
Identifying symbolic links is not available on all platforms and will be dereferenced if -L is passed or POSIXLY_CORRECT is set.
Libmagic library
As of version 4.00 of the Ian Darwin/Christos Zoulas version of file, the functionality of file is incorporated into a libmagic library that is accessible via C (and C-compatible) linking; file is implemented using that library.
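A minimal C sketch of the typical libmagic usage pattern (the file name is just an example and error handling is abbreviated):
#include <stdio.h>
#include <magic.h>
int main(void)
{
    magic_t cookie = magic_open(MAGIC_MIME_TYPE); /* pass MAGIC_NONE for textual descriptions */
    if (cookie == NULL)
        return 1;
    if (magic_load(cookie, NULL) != 0) {          /* NULL loads the default magic database */
        magic_close(cookie);
        return 1;
    }
    const char *type = magic_file(cookie, "compressed.gz"); /* example path */
    printf("%s\n", type != NULL ? type : magic_error(cookie));
    magic_close(cookie);
    return 0;
}
Such a program is linked against the library with -lmagic.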
References
External links
file mailing list
file releases
Manual pages
Other
Fine Free File Command – homepage for version of file used in major BSD and Linux distributions.
File for Windows – webpage of native GnuWin32 port of file for 32 bit Windows.
The libmagic-dev package on packages.debian.org
TrID, an alternative providing ranked answers (instead of just one) based on statistics.
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands |
61893262 | https://en.wikipedia.org/wiki/Ted%20Kaehler | Ted Kaehler | Ted Kaehler (born 1950) is an American computer scientist known for his role in the development of several system methods. He is most noted for his contributions to the programming languages Smalltalk, Squeak, and Apple Computer's HyperCard system, and other technologies developed at Xerox PARC.
Background
Kaehler is the son of a mechanical engineer and grew up tinkering with mechanical toys. During the 1960s, he built a computer on his own following an article published in Scientific American. He went to Gunn High School, a public school in Palo Alto, California, and graduated in 1968. While in high school, Kaehler took a summer job at the company then named Fairchild Industries, where he learned the programming language Fortran. Also during his high school days, he was introduced to his first computer, an IBM 1620, operated by the Palo Alto Unified School District. Kaehler then attended Stanford University to study physics, studied programming under Donald Knuth, learned the language APL, and met Dan Ingalls. He graduated with a Bachelor of Science (B.S.) in physics in 1972. Later, Xerox began a pilot program with Gunn High School, loaning the school a Xerox Alto.
Xerox PARC
Ingalls introduced Kaehler to PARC when he secured a contract with Xerox. They formed a team that included George White, who was already with the company working on speech recognition software. During his early years at PARC, Kaehler attended Carnegie Mellon University. He graduated with a Master of Science (MSc) in computer science in 1976. By the 1980s, he was reportedly demonstrating a virtual reality (VR) technology that involved a user in the 3D game Maze War. This depiction successfully voiced a response in-world to another user in the real world. The development has been touted as the first avatar-centric reference to this kind of VR technology.
Kaehler was also documented as one of the researchers at PARC who briefed Steve Jobs about the company's three innovations: the graphical user interface (GUI) of the Xerox Alto computer, Smalltalk, and Ethernet network at PARC.
Smalltalk
Kaehler was part of a group led by Dr. Alan Kay that refined the concept of network computing through Smalltalk. This is a system that drew from John McCarthy's language LISP and from the simulation programming language Simula, versions 1 and 67, which were developed by the Norwegian Computing Center. In Kay's account of Smalltalk's early development, he cited key milestones attributed to Kaehler. According to Kay, for instance, Kaehler, along with Ingalls, Dave Robson, and Diana Merry, successfully implemented the Smalltalk-76 system from scratch within a period of seven months; it comprised 50 classes making up 180 pages of source code. Kaehler was also credited with designing the virtual memory system named Object-Oriented Zoned Environment (OOZE), which gave Smalltalk more speed, and with developing a system tracer used to clone Smalltalk-76, since the technology could write out new virtual memories from their prior iterations.
With Smalltalk, Kaehler worked closely with two future Turing Award winners. He began a lifelong professional association with Alan Kay, as described herein. Kaehler also co-authored a book, A Taste of Smalltalk, with University of California, Berkeley professor David Patterson, future leader of the RISC-V movement.
Apple
In March 1985, Kaehler moved to Apple as a researcher. He became involved in the development of Macintosh computers, primarily providing technical support. However, Kaehler was more noted for improving other technologies such as the company's HyperCard system from 1985 to 1987. This is a tool that allows users to create entertainment and instructional content. Kaehler added an interface that made it possible to control videodiscs.
In 1996, while at Apple, Kaehler received a US patent for co-inventing user interface intermittent on-demand (pop up) halos around objects, with buttons to manipulate that object.
Squeak
Kaehler also became part of the open-source, community-supported Squeak Central Team in 1996, which also included Ingalls, John Maloney, Scott Wallace, and Andreas Raab. Squeak was initially developed out of Smalltalk-80 at Apple Research Laboratory and was later continued at Walt Disney Imagineering. It was developed as an open and highly portable language that is written fully in Smalltalk and included the EToys system, which allows children to see how the software operates. The use of Smalltalk technology makes Squeak easier to debug, analyze, and change. Kaehler was credited with writing the code of the platform's painting system, Squeak Paintbox, and other EToys pilot versions.
Personal life
In 1982, Kaehler wed Carol Nasby, who also worked at Apple for several years, wrote the first Macintosh Owners Guide, built the HyperCard Help System for version 1.0, and wrote the book HyperCard Power. In 1991, she died from complications of Type 1 diabetes.
In 1998, he wed his second wife, Cynthia. She had been a preschool teacher for 25 years and was an artist who made fused glass pendants for necklaces and brooches. They lived in Las Vegas, Nevada and had three children. In 2020, she died from cancer.
See also
List of computer scientists
List of programmers
References
External links
Living people
People from Palo Alto, California
1950 births
American computer scientists
American computer programmers
Human–computer interaction researchers
Programming language designers
Scientists from California
20th-century American scientists
21st-century American scientists
Scientists at PARC (company)
Hewlett-Packard people
Apple Inc. employees
Open source advocates
Stanford University alumni
Carnegie Mellon University alumni
Gunn High School alumni |
40470509 | https://en.wikipedia.org/wiki/Severin%20Hacker | Severin Hacker | Severin Hacker (born 1984) is a Swiss computer scientist who is the co-founder and CTO of Duolingo, the world's most popular language-learning platform.
Biography
Hacker was born and raised in Zug and studied at ETH Zurich. In a 2020 interview, Hacker specified that gaming played a large role in his interest in computer science: "What originally drew me to computers was video games and the desire to build your own games and understand how those games are built. I was somewhat obsessed."
He moved to Pittsburgh to study at Carnegie Mellon University where he co-founded Duolingo with Luis von Ahn in 2009.
He received his BS in Computer Science from ETH Zurich in 2006 and his PhD in Computer Science from Carnegie Mellon University in 2014.
Founding of Duolingo
Initially, Hacker and his former graduate advisor, Luis von Ahn, wanted to develop an application that could translate internet sites, so that they would be accessible to non-English speakers. They felt that automated translation software was not as effective as using the skills and knowledge of bilingual speakers. During Hacker's doctoral studies, Duolingo became a by-product of this idea, or "happy mistake." Hacker's goal for Duolingo was to make it "100% free" so the most disadvantaged person with an internet connection would still have access to it.
Duolingo
Hacker and his team of PhD students used machine learning to personalize Duolingo to each user. Specifically, they wanted to predict which language concepts the user was on the verge of forgetting. In 2012, a study by American universities showed that spending 34 hours of learning on Duolingo was equivalent to a full semester of a college language course. In 2015, Hacker and von Ahn started selling translations, such as to the Spanish tech news group of CNN.
Retention Philosophy
There are two parts to Hacker's "Retention Philosophy": learning should be fun and motivation should remain high. Through Duolingo, Hacker wants users to have the option to increase their 'stay-tuned quota', which involves adjusting the learning time and difficulty of the course. Another idea derived from Hacker's philosophy was to apply gamification to Duolingo, that is, to apply game elements and principles to the course rather than classroom learning tools.
Awards and honors
In 2014, Hacker received the Crunchie Award for Best Startup.
In 2014, Hacker was included in the MIT Technology Review's "Top Innovators under 35."
In 2016, Hacker and Luis von Ahn received the Tech 50 award.
In 2019, Hacker received One Young World's Entrepreneur of the Year Award.
External business ventures and investments
IAM Robotics, a robotics company focused on autonomous fulfillment.
ViaHero, a trip planning service creating personalized itineraries.
Brainbase, a platform that helps companies manage and monetize their intellectual property.
Gridwise, an app that provides information for driver demand throughout a city.
Abililife, a company developing technologies to assist Parkinson's patients.
References
External links
Severin Hacker's official website
Swiss computer scientists
ETH Zurich alumni
Duolingo
Carnegie Mellon University alumni
Chief technology officers
Date of birth missing (living people)
People from Zug
Living people
1984 births |
37535513 | https://en.wikipedia.org/wiki/Incremental%20encoder | Incremental encoder | An incremental encoder is a linear or rotary electromechanical device that has two output signals, A and B, which issue pulses when the device is moved. Together, the A and B signals indicate both the occurrence of and direction of movement. Many incremental encoders have an additional output signal, typically designated index or Z, which indicates the encoder is located at a particular reference position. Also, some encoders provide a status output (typically designated alarm) that indicates internal fault conditions such as a bearing failure or sensor malfunction.
Unlike an absolute encoder, an incremental encoder does not indicate absolute position; it only reports changes in position and, for each reported position change, the direction of movement. Consequently, to determine absolute position at any particular moment, it is necessary to send the encoder signals to an incremental encoder interface, which in turn will "track" and report the encoder's absolute position.
Incremental encoders report position changes nearly instantaneously, which allows them to monitor the movements of high speed mechanisms in near real-time. Because of this, incremental encoders are commonly used in applications that require precise measurement and control of position and velocity.
Quadrature outputs
An incremental encoder employs a quadrature encoder to generate its A and B output signals. The pulses emitted from the A and B outputs are quadrature-encoded, meaning that when the incremental encoder is moving at a constant velocity, the A and B waveforms are square waves and there is a 90 degree phase difference between A and B.
At any particular time, the phase difference between the A and B signals will be positive or negative depending on the encoder's direction of movement. In the case of a rotary encoder, the phase difference is +90° for clockwise rotation and −90° for counter-clockwise rotation, or vice versa, depending on the device design.
The frequency of the pulses on the A or B output is directly proportional to the encoder's velocity (rate of position change); higher frequencies indicate rapid movement, whereas lower frequencies indicate slower speeds. Static, unchanging signals are output on A and B when the encoder is motionless. In the case of a rotary encoder, the frequency indicates the speed of the encoder's shaft rotation, and in linear encoders the frequency indicates the speed of linear traversal.
Conceptual drawings of quadrature encoder sensing mechanisms
Resolution
The resolution of an incremental encoder is a measure of the precision of the position information it produces. Encoder resolution is typically specified in terms of the number of A (or B) pulses per unit displacement or, equivalently, the number of A (or B) square wave cycles per unit displacement. In the case of rotary encoders, resolution is specified as the number of pulses per revolution (PPR) or cycles per revolution (CPR), whereas linear encoder resolution is typically specified as the number of pulses issued for a particular linear traversal distance (e.g., 1000 pulses per mm).
This is in contrast to the measurement resolution of the encoder, which is the smallest position change that the encoder can detect. Every signal edge on A or B indicates a detected position change. Since each square-wave cycle on A (or B) encompasses four signal edges (rising A, rising B, falling A and falling B), the encoder's measurement resolution equals one-fourth of the displacement represented by a full A or B output cycle. For example, a 1000 pulse-per-mm linear encoder has a cycle length of 1 mm / 1000 cycles = 1 μm, so this encoder's measurement resolution is 1 μm / 4 = 250 nm.
Symmetry and phase
When moving at constant velocity, an ideal incremental encoder would output perfect square waves on A and B (i.e., the pulses would be exactly 180° wide) with a phase difference of exactly 90° between A and B. In real encoders, however, due to sensor imperfections, the pulse widths are never exactly 180° and the phase difference is never exactly 90°. Furthermore, the A and B pulse widths vary from one cycle to another (and from each other) and the phase difference varies at every A and B signal edge. Consequently, both the pulse width and phase difference will vary over a range of values.
For any particular encoder, the pulse width and phase difference ranges are defined by "symmetry" and "phase" (or "phasing") specifications, respectively. For example, in the case of an encoder with symmetry specified as 180° ±25°, the width of every output pulse is guaranteed to be at least 155° and no more than 205°. Similarly, with phase specified as 90° ±20°, the phase difference at every A or B edge will be at least 70° and no more than 110°.
Signal types
Incremental encoders employ various types of electronic circuits to drive (transmit) their output signals, and manufacturers often have the ability to build a particular encoder model with any of several driver types. Commonly available driver types include open collector, mechanical, push-pull and differential RS-422.
Open collector
Open collector drivers operate over a wide range of signal voltages and often can sink significant output current, making them useful for directly driving current loops, opto-isolators and fiber optic transmitters.
Because it cannot source current, the output of an open-collector driver must be connected to a positive DC voltage through a pull-up resistor. Some encoders provide an internal resistor for this purpose; others do not and thus require an external pull-up resistor. In the latter case, the resistor typically is located near the encoder interface to improve noise immunity.
The encoder's high-level logic signal voltage is determined by the voltage applied to the pull-up resistor (VOH in the schematic), whereas the low-level output current is determined by both the signal voltage and load resistance (including pull-up resistor). When the driver switches from the low to the high logic level, the load resistance and circuit capacitance act together to form a low-pass filter, which stretches (increases) the signal's rise time and thus limits its maximum frequency. For this reason, open collector drivers typically are not used when the encoder will output high frequencies.
Mechanical
Mechanical (or contact) incremental encoders use sliding electrical contacts to directly generate the A and B output signals. Typically, the contacts are electrically connected to signal ground when closed so that the outputs will be "driven" low, effectively making them mechanical equivalents of open collector drivers and therefore subject to the same signal conditioning requirements (i.e. external pull-up resistor).
The maximum output frequency is limited by the same factors that affect open-collector outputs, and further limited by contact bounce – which must be filtered by the encoder interface – and by the operating speed of the mechanical contacts, thus making these devices impractical for high frequency operation. Furthermore, the contacts experience mechanical wear under normal operation, which limits the life of these devices. On the other hand, mechanical encoders are relatively inexpensive because they have no internal, active electronics. Taken together, these attributes make mechanical encoders a good fit for low duty, low frequency applications.
PCB- and panel-mounted mechanical incremental encoders are widely used as hand-operated controls in electronic equipment. Such devices are used as volume controls in audio equipment, as voltage controls in bench power supplies, and for a variety of other functions.
Push-pull
Push-pull outputs (e.g., TTL) typically are used for direct interface to logic circuitry. These are well-suited to applications in which the encoder and interface are located near each other (e.g., interconnected via printed circuit conductors or short, shielded cable runs) and powered from a common power supply, thus avoiding exposure to electric fields, ground loops and transmission line effects that might corrupt the signals and thereby disrupt position tracking, or worse, damage the encoder interface.
Differential pair
Differential RS-422 signaling is typically preferred when the encoder will output high frequencies or be located far away from the encoder interface, or when the encoder signals may be subjected to electric fields or common-mode voltages, or when the interface must be able to detect connectivity problems between encoder and interface. Examples of this include CMMs and CNC machinery, industrial robotics, factory automation, and motion platforms used in aircraft and spacecraft simulators.
When RS-422 outputs are employed, the encoder provides a differential conductor pair for every logic output; for example, "A" and "/A" are commonly-used designations for the active-high and active-low differential pair comprising the encoder's A logic output. Consequently, the encoder interface must provide RS-422 line receivers to convert the incoming RS-422 pairs to single-ended logic.
Principal applications
Position tracking
Incremental encoders are commonly used to monitor the physical positions of mechanical devices. The incremental encoder is mechanically attached to the device to be monitored so that its output signals will change as the device moves. Example devices include the balls in mechanical computer mice and trackballs, control knobs in electronic equipment, and rotating shafts in radar antennas.
An incremental encoder does not keep track of, nor do its outputs indicate the current encoder position; it only reports incremental changes in position. Consequently, to determine the encoder's position at any particular moment, it is necessary to provide external electronics which will "track" the position. This external circuitry, which is known as an incremental encoder interface, tracks position by counting incremental position changes.
As it receives each report of incremental position change (indicated by a transition of the A or B signal), an encoder interface will take into account the phase relationship between A and B and, depending on the sign of the phase difference, count up or down. The cumulative "counts" value indicates the distance traveled since tracking began. This mechanism ensures accurate position tracking in bidirectional applications and, in unidirectional applications, prevents false counts that would otherwise result from vibration or mechanical dithering near an AB code transition.
Displacement units
Often the encoder counts must be expressed in units such as meters, miles or revolutions. In such cases, the counts are converted to the desired units by multiplying them by the encoder displacement per count:

displacement = counts × (displacement per count)
Typically this calculation is performed by a computer which reads the counts from the incremental encoder interface. For example, in the case of a linear incremental encoder that produces 8000 counts per millimeter of travel, the position in millimeters is calculated as follows:

position (mm) = counts / (8000 counts per mm)
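A minimal C sketch of this conversion, using the 8000 counts-per-millimeter figure from the example above (the function name is illustrative):

```c
/* Convert raw encoder counts to millimeters for a linear encoder with
 * 8000 counts per millimeter (from the example above).
 * Names are illustrative, not taken from any particular library.
 */
#define COUNTS_PER_MM 8000.0

static double counts_to_mm(long counts)
{
    return (double)counts / COUNTS_PER_MM;   /* e.g. 20000 counts -> 2.5 mm */
}
```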
Homing
In order for an incremental encoder interface to track and report absolute position, the encoder counts must be correlated to a reference position in the mechanical system to which the encoder is attached. This is commonly done by homing the system, which consists of moving the mechanical system (and encoder) until it aligns with a reference position, and then jamming the associated absolute position counts into the encoder interface's counter.
Some mechanical systems incorporate a proximity sensor to facilitate homing; this sensor outputs a signal when the mechanical system is in its "home" (reference) position. In such cases, the mechanical system is homed by moving it until the encoder interface receives the sensor signal, whereupon the corresponding position value is jammed into the position counter.
In some rotating mechanical systems (e.g. rotating radar antennas), the "position" of interest is the rotational angle relative to a reference orientation. These typically employ a rotary incremental encoder that has an index (or Z) output signal. The index signal is asserted when the shaft is in its reference orientation, which causes the encoder interface to jam the reference angle into its position counter.
Some incremental encoder applications lack reference position detectors and therefore must implement homing by other means. For example, a computer using a mouse or trackball pointing device typically will home the device by assuming a central initial screen position upon booting and jamming the corresponding counts into the X and Y position counters. In the case of panel encoders used as hand-operated controls (e.g., audio volume control), the initial position typically is retrieved from flash or other non-volatile memory upon power-up and jammed into the position counter; upon power-down, the current position count is saved to non-volatile memory to serve as the initial position for the next power-up.
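The sketch below outlines one way a homing sequence might be structured in software; the sensor, motion and counter accesses are simulated placeholders rather than the API of any particular encoder interface.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical homing sequence: move until the home sensor asserts, then
 * jam the known reference counts into the interface's position counter.
 * The sensor, motion and counter accesses are simulated here; in a real
 * system they would be hardware or driver calls.
 */
static long position_counter;       /* stands in for the interface's counter  */
static int  simulated_axis = 25;    /* simulated distance from home, in steps */

static bool home_sensor_active(void) { return simulated_axis == 0; }
static void move_toward_home(void)   { simulated_axis--; }

static void home_axis(long reference_counts)
{
    while (!home_sensor_active())
        move_toward_home();                 /* creep toward the reference position */

    position_counter = reference_counts;    /* "jam" the reference counts */
}

int main(void)
{
    home_axis(0);
    printf("homed; counter = %ld\n", position_counter);
    return 0;
}
```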
Speed measurement
Incremental encoders are commonly used to measure the speed of mechanical systems. This may be done for monitoring purposes or to provide feedback for motion control, or both. Widespread applications of this include speed control of radar antenna rotation and material conveyors, and motion control in robotics, CMM and CNC machines.
Incremental encoder interfaces are primarily concerned with tracking mechanical displacement and usually do not directly measure speed. Consequently, speed must be indirectly measured by taking the derivative of the position with respect to time. The position signal is inherently quantized, which poses challenges for taking the derivative due to quantization error, especially at low speeds.
Encoder speed can be determined either by counting or by timing the encoder output pulses (or edges). The resulting value indicates a frequency or period, respectively, from which speed can be calculated. The speed is proportional to frequency, and inversely proportional to period.
By frequency
If the position signal is sampled (a discrete time signal), the pulses (or pulse edges) are detected and counted by the interface, and speed is typically calculated by a computer which has read access to the interface. To do this, the computer reads the position counts from the interface at time t1 to obtain x1 and then, at some later time t2, reads the counts again to obtain x2. The average speed during the interval t1 to t2 is then calculated:

speed = (x2 - x1) / (t2 - t1)
The resulting speed value is expressed as counts per unit time (e.g., counts per second). In practice, however, it is often necessary to express the speed in standardized units such as meters per second, revolutions per minute (RPM), or miles per hour (MPH). In such cases, the software will take into account the relationship between counts and desired distance units, as well as the ratio of the sampling period to desired time units. For example, in the case of a rotary incremental encoder that produces 4096 counts per revolution, which is being read once per second, the software would compute RPM as follows:

RPM = (counts per second × 60 seconds per minute) / (4096 counts per revolution)
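A minimal C sketch of this calculation, assuming two count readings taken one second apart (the function and variable names are illustrative):

```c
#include <stdio.h>

/* Average speed from two position samples, for a rotary encoder with
 * 4096 counts per revolution read once per second (from the example above).
 */
#define COUNTS_PER_REV 4096.0

static double rpm_from_samples(long counts1, long counts2, double dt_seconds)
{
    double counts_per_second = (double)(counts2 - counts1) / dt_seconds;
    return counts_per_second * 60.0 / COUNTS_PER_REV;   /* convert to rev/min */
}

int main(void)
{
    /* e.g. the counter advanced by 2048 counts in one second -> 30 RPM */
    printf("%.1f RPM\n", rpm_from_samples(10000, 12048, 1.0));
    return 0;
}
```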
When measuring speed this way, the measurement resolution is proportional to both the encoder resolution and the sampling period (the elapsed time between the two samples); measurement resolution will become higher as the sampling period increases.
By period
Alternatively, a speed measurement can be reported at each encoder output pulse by measuring the pulse width or period. When this method is used, measurements are triggered at specific positions instead of at specific times. The speed calculation is the same as shown above (counts divided by time), although in this case the measurement start and stop times (t1 and t2) are provided by a time reference.
This technique avoids position quantization error but introduces errors related to quantization of the time reference. Also, it is more sensitive to sensor non-idealities such as phase errors, symmetry errors, and variations in the transition locations from their nominal values.
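A brief sketch of the period method, assuming a capture timer that records the number of timer ticks between consecutive encoder edges; the timer frequency and displacement per edge are illustrative values:

```c
#include <stdio.h>

/* Speed from the measured period between consecutive encoder edges.
 * Assumes a capture timer running at 1 MHz and an encoder that moves
 * 0.005 mm per counted edge; both values are illustrative.
 */
#define TIMER_HZ    1.0e6    /* capture timer frequency, Hz        */
#define MM_PER_EDGE 0.005    /* displacement per counted edge, mm  */

static double speed_mm_per_s(unsigned long ticks_between_edges)
{
    double period_s = (double)ticks_between_edges / TIMER_HZ;
    return MM_PER_EDGE / period_s;   /* speed is inversely proportional to period */
}

int main(void)
{
    /* 250 timer ticks between edges -> 250 us period -> 20 mm/s */
    printf("%.1f mm/s\n", speed_mm_per_s(250));
    return 0;
}
```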
Incremental encoder interface
An incremental encoder interface is an electronic circuit that receives signals from an incremental encoder, processes the signals to produce absolute position and other information, and makes the resulting information available to external circuitry.
Incremental encoder interfaces are implemented in a variety of ways, including as ASICs, as IP blocks within FPGAs, as dedicated peripheral interfaces in microcontrollers and, when high count rates are not required, as polled (software monitored) GPIOs.
Regardless of the implementation, the interface must sample the encoder's A and B output signals frequently enough to detect every AB state change before the next state change occurs. Upon detecting a state change, it will increment or decrement the position counts based on whether A leads or trails B. This is typically done by storing a copy of the previous AB state and, upon state change, using the current and previous AB states to determine movement direction.
Line receivers
Incremental encoder interfaces use various types of electronic circuits to receive encoder-generated signals. These line receivers serve as buffers to protect downstream interface circuitry and, in many cases, also provide signal conditioning functions.
Single-ended
Incremental encoder interfaces typically employ Schmitt trigger inputs to receive signals from encoders that have single-ended (e.g., push-pull, open collector) outputs. This type of line receiver inherently rejects low-level noise (by means of its input hysteresis) and protects downstream circuitry from invalid (and possibly destructive) logic signal levels.
Differential
RS-422 line receivers are commonly used to receive signals from encoders that have differential outputs. This type of receiver rejects common-mode noise and converts the incoming differential signals to the single-ended form required by downstream logic circuits.
In mission-critical systems, an encoder interface may be required to detect loss of input signals due to encoder power loss, signal driver failure, cable fault or cable disconnect. This is usually accomplished by using enhanced RS-422 line receivers which detect the absence of valid input signals and report this condition via a "signal lost" status output. In normal operation, glitches (brief pulses) may appear on the status outputs during input state transitions; typically, the encoder interface will filter the status signals to prevent these glitches from being erroneously interpreted as lost signals. Depending on the interface, subsequent processing may include generating an interrupt request upon detecting signal loss, and sending notification to the application for error logging or failure analysis.
Clock synchronization
An incremental encoder interface largely consists of sequential logic which is paced by a clock signal. However, the incoming encoder signals are asynchronous with respect to the interface clock because their timing is determined solely by encoder movement. Consequently, the output signals from the A and B (also Z and alarm, if used) line receivers must be synchronized to the interface clock, both to avoid errors due to metastability and to coerce the signals into the clock domain of the quadrature decoder.
Typically this synchronization is performed by independent, single-signal synchronizers such as the two flip-flop synchronizer seen here. At very high clock frequencies, or when a very low error rate is needed, the synchronizers may include additional flip-flops in order to achieve an acceptably low bit error rate.
Input filter
In many cases an encoder interface must filter the synchronized encoder signals before further processing them. This may be required in order to reject low-level noise and brief, large-amplitude noise spikes commonly found in motor applications and, in the case of mechanical-type encoders, to debounce A and B to avoid count errors due to mechanical contact bounce.
Hardware-based interfaces often provide programmable filters for the encoder signals, which provide a wide range of filter settings and thus allow them to debounce contacts or suppress transients resulting from noise or slowly slewing signals, as needed. In software-based interfaces, A and B typically are connected to GPIOs that are sampled (via polling or edge interrupts) and debounced by software.
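One common software approach, sketched below under the assumption that the encoder signals are polled GPIO inputs, is to accept a new logic level only after it has remained stable for several consecutive samples; the sample threshold and names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simple consensus debounce for one polled encoder signal: the filtered
 * level changes only after the raw input has held the new level for
 * STABLE_SAMPLES consecutive polls. Values and names are illustrative.
 */
#define STABLE_SAMPLES 4

typedef struct {
    bool    filtered;   /* debounced output level            */
    uint8_t run;        /* consecutive samples at new level  */
} debounce_t;

static bool debounce(debounce_t *d, bool raw_sample)
{
    if (raw_sample == d->filtered) {
        d->run = 0;                     /* input agrees with output; reset   */
    } else if (++d->run >= STABLE_SAMPLES) {
        d->filtered = raw_sample;       /* stable long enough; accept level  */
        d->run = 0;
    }
    return d->filtered;
}
```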
Quadrature decoder
Incremental encoder interfaces commonly use a quadrature decoder to convert the A and B signals into the direction and count enable (clock enable) signals needed for controlling a bidirectional (up- and down-counting) synchronous counter.
Typically, a quadrature decoder is implemented as a finite-state machine (FSM) which simultaneously samples the A and B signals and thus produces combined "AB" samples. As each new AB sample is acquired, the FSM will store the previous AB sample for later analysis. The FSM evaluates the differences between the new and previous AB states and generates direction and count enable signals as appropriate for the detected AB state sequence.
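One common software realization of such an FSM, shown below as a generic illustration (not the implementation of any particular interface), packs the previous and current AB samples into a four-bit index and uses a lookup table to decide whether to count up, count down, do nothing, or flag an error; the direction sign is a convention that reverses if A and B are swapped.

```c
#include <stdint.h>

/* Table-driven quadrature decoder (x4): index = (previous AB << 2) | current AB,
 * with A in bit 1 and B in bit 0 of each sample.
 * Entry values: +1 = count up, -1 = count down, 0 = no movement,
 * 2 = error (both A and B changed between samples). Generic illustration only.
 */
static const int8_t QDEC_TABLE[16] = {
     0, +1, -1,  2,
    -1,  0,  2, +1,
    +1,  2,  0, -1,
     2, -1, +1,  0
};

typedef struct {
    uint8_t prev_ab;   /* previous AB sample                      */
    long    counts;    /* position counter                        */
    int     error;     /* set when an invalid transition is seen  */
} qdec_t;

/* Feed one synchronized AB sample (a, b in {0,1}) into the decoder. */
static void qdec_update(qdec_t *q, unsigned a, unsigned b)
{
    uint8_t ab   = (uint8_t)((a << 1) | b);
    int8_t  step = QDEC_TABLE[(q->prev_ab << 2) | ab];

    if (step == 2)
        q->error = 1;        /* both signals changed: direction unknown */
    else
        q->counts += step;   /* -1, 0 or +1 */

    q->prev_ab = ab;
}
```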
State transitions
In any two consecutive AB samples, the logic level of A or B may change or both levels may remain unchanged, but in normal operation A and B will never both change. In this regard, each AB sample is effectively a two-bit Gray code.
Normal transitions
When only A or B changes state, it is assumed that the encoder has moved one increment of its measurement resolution and, accordingly, the quadrature decoder will assert its count enable output to allow the counts to change. Depending on the encoder's direction of travel (forward or reverse), the decoder will assert or negate its direction output to cause the counts to increment or decrement (or vice versa).
When neither A nor B changes, it is assumed that the encoder has not moved and so the quadrature decoder negates its count enable output, thereby causing the counts to remain unchanged.
Errors
If both the A and B logic states change in consecutive AB samples, the quadrature decoder has no way of determining how many increments, or in what direction the encoder has moved. This can happen if the encoder speed is too fast for the decoder to process (i.e., the rate of AB state changes exceeds the quadrature decoder's sampling rate; see Nyquist rate) or if the A or B signal is noisy.
In many encoder applications this is a catastrophic event because the counter no longer provides an accurate indication of encoder position. Consequently, quadrature decoders often will output an additional error signal which is asserted when the A and B states change simultaneously. Due to the severity and time-sensitive nature of this condition, the error signal is often connected to an interrupt request.
Clock multiplier
A quadrature decoder does not necessarily allow the counts to change for every incremental position change. When a decoder detects an incremental position change (due to a transition of A or B, but not both), it may allow the counts to change or it may inhibit counting, depending on the AB state transition and the decoder's clock multiplier.
The clock multiplier of a quadrature decoder is so named because it results in a count rate which is a multiple of the A or B pulse frequency. Depending on the decoder's design, the clock multiplier may be hardwired into the design or it may be run-time configurable via input signals.
The clock multiplier value may be one, two or four (typically designated "x1", "x2" and "x4", or "1x", "2x" and "4x"). In the case of a x4 multiplier, the counts will change for every AB state change, thereby resulting in a count rate equal to four times the A or B frequency. The x2 and x1 multipliers allow the counts to change on some, but not all AB state changes, as shown in the quadrature decoder state table above (note: this table shows one of several possible implementations for x2 and x1 multipliers; other implementations may enable counting at different AB transitions).
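As a simple illustration of how a multiplier setting might gate counting in software (one of several possible conventions, consistent with the note above), the sketch below enables counting on every valid transition for x4, on both edges of A for x2, and on rising edges of A only for x1.

```c
#include <stdbool.h>
#include <stdint.h>

/* One possible way to gate counting by clock multiplier; conventions vary
 * between implementations. AB samples use bit 1 = A, bit 0 = B, and the
 * transition is assumed to have already passed error checking.
 */
typedef enum { MULT_X1 = 1, MULT_X2 = 2, MULT_X4 = 4 } qdec_mult_t;

static bool count_enabled(uint8_t prev_ab, uint8_t curr_ab, qdec_mult_t mult)
{
    bool a_changed = ((prev_ab ^ curr_ab) & 0x2) != 0;
    bool a_rising  = !(prev_ab & 0x2) && (curr_ab & 0x2);

    switch (mult) {
    case MULT_X4: return prev_ab != curr_ab;  /* every valid AB state change */
    case MULT_X2: return a_changed;           /* both edges of A             */
    case MULT_X1: return a_rising;            /* rising edges of A only      */
    default:      return false;
    }
}
```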
Position reporting
From an application's perspective, the fundamental purpose of an incremental encoder interface is to report position information on demand. Depending on the application, this may be as simple as allowing the computer to read the position counter at any time under program control. In more complex systems, the position counter may be sampled and processed by intermediate state machines, which in turn make the samples available to the computer.
Sample register
An encoder interface typically employs a sample register to facilitate position reporting. In the simple case where the computer demands position information under program control, the interface will sample the position counter (i.e., copy the current position counts to the sample register) and then the computer will read the counts from the sample register. This mechanism results in atomic operation and thus ensures the integrity of the sample data, which might otherwise be at risk (e.g., if the sample's word size exceeds the computer's word size).
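The simulated sketch below illustrates the hazard the sample register avoids when a 32-bit counter must be read as two 16-bit halves: latching the counter first keeps both halves consistent even though the live counter keeps changing. All register and function names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustration of the latch-then-read pattern. The "hardware" is simulated:
 * strobing the sample copies the live counter into the sample register,
 * after which the two 16-bit halves read back from the sample register are
 * guaranteed to belong to the same instant. All names are hypothetical.
 */
static int32_t live_counter;      /* changes asynchronously in real hardware */
static int32_t sample_register;   /* frozen copy taken at the sample strobe  */

static void     strobe_sample(void)    { sample_register = live_counter; }
static uint16_t read_sample_low(void)  { return (uint16_t)(sample_register & 0xFFFF); }
static uint16_t read_sample_high(void) { return (uint16_t)((uint32_t)sample_register >> 16); }

int main(void)
{
    live_counter = 0x0001FFFF;    /* position just before a carry    */
    strobe_sample();              /* atomically latch the counter    */
    live_counter = 0x00020000;    /* encoder keeps moving meanwhile  */

    int32_t position = (int32_t)(((uint32_t)read_sample_high() << 16) | read_sample_low());
    printf("sampled position = %ld\n", (long)position);   /* consistent: 0x0001FFFF */
    return 0;
}
```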
Triggered sampling
In some cases the computer may not be able to programmatically (via programmed I/O) acquire position information with adequate timing precision. For example, the computer may be unable to demand samples on a timely periodic schedule (e.g., for speed measurement) due to software timing variability. Also, in some applications it is necessary to demand samples upon the occurrence of external events, and the computer may be unable to do so in a timely manner. At higher encoder speeds and resolutions, position measurement errors can occur even when interrupts are used to demand samples, because the encoder may move between the time the IRQ is signaled and the sample demand is issued by the interrupt handler.
To overcome this limitation, it is common for an incremental encoder interface to implement hardware-triggered sampling, which enables it to sample the position counter at precisely-controlled times as dictated by a trigger input signal. This is important when the position must be sampled at particular times or in response to physical events, and essential in applications such as multi-axis motion control and CMM, in which the position counters of multiple encoder interfaces (one per axis) must be simultaneously sampled.
In many applications the computer must know precisely when each sample was acquired and, if the interface has multiple trigger inputs, which signal triggered the sample acquisition. To satisfy these requirements, the interface typically will include a timestamp and trigger information in every sample.
Event notification
Sampling triggers are often asynchronous with respect to software execution. Consequently, when the position counter is sampled in response to a trigger signal, the computer must be notified (typically via interrupt) that a sample is available. This allows the software to be event-driven (vs. polled), which facilitates responsive system behavior and eliminates polling overhead.
Sample FIFO
Consecutive sampling triggers may occur faster than the computer can process the resulting samples. When this happens, the information in the sample register will be overwritten before it can be read by the computer, resulting in data loss. To avoid this problem, some incremental encoder interfaces provide a FIFO buffer for samples. As each sample is acquired, it is stored in the FIFO. When the computer demands a sample, it is allowed to read the oldest sample in the FIFO.