id | url | title | text
---|---|---|---|
8236343 | https://en.wikipedia.org/wiki/AboutUs.com | AboutUs.com | AboutUs.com is a wiki Internet domain directory. It lists websites along with information about their content. As a wiki, AboutUs allows Internet users to add entries or modify information. AboutUs.com has since become a wiki for more than just websites. The site now allows pages to be created for people, places, and almost anything else.
Ray King, Jay Westerdal, and Paul Stahura founded AboutUs in 2005. In 2006, a small staff of five people in Portland, Oregon, United States, built out the site. The staff expanded to more than thirty people across two continents, with an office in Lahore, Pakistan. From May 2007 to early 2011, Ward Cunningham, developer of the first wiki, was the chief technology officer of AboutUs.
AboutUs attracted at least 1.4 million U.S. visitors in July 2008. The site originally used the domain name "AboutUs.org" but moved to "AboutUs.com" in May 2010. Traffic and revenue started declining sharply, and in 2013 AboutUs.com and its assets were sold to Omkarasana LLC, a Colorado limited liability company located in Denver. The new company has since redesigned the website, migrated its infrastructure to Amazon Web Services, and increased visitors and revenue. There are now approximately four individuals who actively operate the site.
Website contents
There were more than ten million entries on AboutUs.com as of March 2008, and new pages were being added at a rate of about 25,000 a day. Most entries were created by a web robot (bot); web searches by users direct the bot to create a page for a web domain. In many cases the content is simply a republication of the contents of the "About us", "About me", or similar page on the website. Such pages typically describe the entity that owns the site, and may include self-promotional information, which AboutUs.com does not restrict. In many other cases the content of an entry consists of the whois data for the website. As of February 2014, there were more than 20 million entries.
Data use
Some websites that analyze domains and traffic link to AboutUs.com as a point of additional reference. Notably, the "Whois" site DomainTools now references the Aboutus.com listing along with standard server information, domain ownership and history, and additional data. AboutUs has also referenced whois data content from the registrars Network Solutions and Register.com.
Malware domains
To prevent users from visiting malware domains, editors work with McAfee SiteAdvisor and PHSDL to identify such domains. Once a malware domain is identified, an AboutUs MalwareSite template is placed to warn users.
Popularity
AboutUs.com is an open directory, and does not restrict website owners from writing in their own comments, or referencing commercial information that is not necessarily notable; webmasters are permitted to engage in self-promotion.
AboutUs.com fills the gap for webmasters by encouraging the posting of information about all websites regardless of notability or commercial interest.
Because of the open editing policy and the inclusion of commercial sites, its content has grown rapidly. It is also popular among webmasters and website owners seeking to include their "about us" information on AboutUs.com as a central point of summary information on the web.
Funding
In November 2006, AboutUs closed its initial financing round for one million dollars. In January 2009, AboutUs secured a $5 million funding round from Voyager Capital.
See also
DMOZ
Alexa Internet
List of companies based in Oregon
References
External links
AboutUs.com
Marshall Kilpatrick, "AboutUs: A Wiki About Every Website," TechCrunch, 11/14/2006.
MediaWiki websites
Web directories
Companies based in Portland, Oregon
Internet properties established in 2006
2006 establishments in Oregon |
322596 | https://en.wikipedia.org/wiki/GO%20Corp. | GO Corp. | GO Corporation was founded in 1987 to create portable computers, an operating system, and software with a pen-based user interface. It was notable not only for its pioneering work in pen-based computing but also for being one of the best-funded start-up companies of its time.
Its founders were Jerry Kaplan, Robert Carr, and Kevin Doren. Kaplan subsequently chronicled the history of the company in his book Startup: A Silicon Valley Adventure. Omid Kordestani, former Senior VP of Global Business at Google, began his startup career with GO Corporation. Other notable GO alumni include CEO Bill Campbell (who later became chairman of Intuit), VP of Sales Stratton Sclavos (who took VeriSign public as its CEO), CFO and VP of Business Operations Randy Komisar (who became CEO of LucasArts), and VP of Marketing Mike Homer (who was VP of Marketing at the time of Netscape's IPO in 1995).
History
Though the company enjoyed high levels of public awareness and generally positive attention from industry press, it ran into fierce competition, first from Microsoft (whose Pen Services for Windows were later the subject of an FTC investigation and patent violation suits by GO), and later from Apple's Newton project and others. The company lined up software development partners but struggled to deliver hardware and software on its intended schedule. In 1991, GO spun off its hardware unit under the name EO Inc., and in 1993 EO was acquired by AT&T Corporation, which hoped that its devices would showcase the AT&T Hobbit microprocessors. This sale raised much-needed cash but introduced new problems, as EO then ceased to coordinate well with GO's management, even considering adopting competing operating systems. Facing a cash crisis, GO agreed to sell itself to AT&T as well, bringing the two halves of the company back under one roof as of January 1994.
GO's PenPoint OS ran on AT&T's EO Personal Communicator and computers from IBM and others, but despite some success in vertical markets, consumers in the 1990s did not adopt tablet computing as enthusiastically as GO management had expected. (GO produced a 286-based lightweight "Go Computer" specifically for developers and evaluators; the company's emphasis was that end users would run PenPoint OS on third-party hardware.) In January 1994, only two weeks after acquiring GO, AT&T decided to cancel the Hobbit product line, leaving it no reason to continue to support EO or GO. GO had by then ceased to develop for other chips, and sales on the other platforms were small anyway. Co-founder Jerry Kaplan says that in its lifetime, the company generated "no meaningful sales". The loss of AT&T's support left GO with little chance of future revenue and, after burning through $75 million of venture funding, the company closed in July 1994.
Lawsuits
On 29 June 2005, Kaplan filed an antitrust lawsuit against Microsoft, alleging that Microsoft technicians had stolen technology from GO that had been shown to them under a non-disclosure agreement.
In a separate legal matter, in April 2008, certain features of Microsoft's Windows/Tablet PC operating system and hardware were found to infringe a GO Corporation patent concerning user interfaces for pen computers.
See also
Apple Newton
PenPoint OS
Pen computing
History of tablet computers
Notes
References
- Contains two chapters dealing with the GO story from a view inside Microsoft.
External links
IDEO - Company that helped develop the EO Personal Communicator, based on the PenPoint operating system.
Annotated bibliography of references to handwriting recognition, gestures and pen computing
Notes on the History of Pen-based Computing (YouTube)
American companies established in 1987
American companies disestablished in 1994
Computer companies established in 1987
Computer companies disestablished in 1994
Defunct computer companies of the United States
Defunct software companies |
17765521 | https://en.wikipedia.org/wiki/Software%20development%20effort%20estimation | Software development effort estimation | In software development, effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.
State-of-practice
Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.
Typically, effort estimates are over-optimistic and there is a strong over-confidence in their accuracy. The mean effort overrun seems to be about 30% and is not decreasing over time. Reviews of effort estimation error surveys are available in the literature. However, the measurement of estimation error is itself problematic; see Assessing the accuracy of estimates.
The strong overconfidence in the accuracy of the effort estimates is illustrated by the finding that, on average, if a software professional is 90% confident or “almost sure” to include the actual effort in a minimum-maximum interval, the observed frequency of including the actual effort is only 60-70%.
Currently, the term “effort estimate” is used to denote different concepts, such as the most likely effort (modal value), the effort that corresponds to a 50% probability of not being exceeded (median), the planned effort, the budgeted effort, or the effort used to propose a bid or price to the client. This is believed to be unfortunate, because communication problems may occur and because the concepts serve different goals.
History
Software researchers and practitioners have been addressing the problems of effort estimation for software development projects since at least the 1960s; see, e.g., work by Farr and Nelson.
Most of the research has focused on the construction of formal software effort estimation models. The early models were typically based on regression analysis or mathematically derived from theories from other domains. Since then, a large number of model-building approaches have been evaluated, such as approaches founded on case-based reasoning, classification and regression trees, simulation, neural networks, Bayesian statistics, lexical analysis of requirement specifications, genetic programming, linear programming, economic production models, soft computing, fuzzy logic modeling, statistical bootstrapping, and combinations of two or more of these models. Perhaps the most common estimation methods today are the parametric estimation models COCOMO, SEER-SEM and SLIM. They have their basis in estimation research conducted in the 1970s and 1980s and have since been updated with new calibration data, the last major release being COCOMO II in the year 2000. Estimation approaches based on functionality-based size measures, e.g., function points, are also based on research conducted in the 1970s and 1980s, but were re-calibrated with modified size measures and different counting approaches, such as use case points or object points, in the 1990s.
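As an illustration of the parametric models mentioned above, the basic COCOMO form estimates effort as a power function of program size. The sketch below uses the published basic-COCOMO coefficients for an "organic" project; it is a minimal illustration under those assumptions, not a calibrated estimate for any particular organization.

```python
# Minimal sketch of the basic COCOMO effort equation:
#   effort (person-months) = a * (KLOC ** b)
# The coefficients below are the published basic-COCOMO values for an
# "organic" (small, in-house) project; real use requires local calibration.

def basic_cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Return estimated effort in person-months for a project of `kloc` KLOC."""
    return a * (kloc ** b)

if __name__ == "__main__":
    for size in (10, 50, 100):  # thousands of lines of code
        print(f"{size} KLOC -> {basic_cocomo_effort(size):.1f} person-months")
```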
Estimation approaches
There are many ways of categorizing estimation approaches. The top-level categories are the following:
Expert estimation: The quantification step, i.e., the step where the estimate is produced based on judgmental processes.
Formal estimation model: The quantification step is based on mechanical processes, e.g., the use of a formula derived from historical data.
Combination-based estimation: The quantification step is based on a judgmental and mechanical combination of estimates from different sources.
Below are examples of estimation approaches within each category.
Selection of estimation approaches
The evidence on differences in estimation accuracy of different estimation approaches and models suggests that there is no “best approach” and that the relative accuracy of one approach or model in comparison to another depends strongly on the context. This implies that different organizations benefit from different estimation approaches. Findings that may support the selection of an estimation approach based on the expected accuracy of an approach include:
Expert estimation is on average at least as accurate as model-based effort estimation. In particular, situations with unstable relationships and information of high importance not included in the model may suggest use of expert estimation. This assumes, of course, that experts with relevant experience are available.
Formal estimation models not tailored to a particular organization’s own context may be very inaccurate. Use of the organization's own historical data is consequently crucial if one cannot be sure that the estimation model’s core relationships (e.g., formula parameters) are based on similar project contexts.
Formal estimation models may be particularly useful in situations where the model is tailored to the organization’s context (either through use of its own historical data or because the model is derived from similar projects and contexts), and where it is likely that the experts’ estimates will be subject to a strong degree of wishful thinking.
The most robust finding, in many forecasting domains, is that combining estimates from independent sources, preferably applying different approaches, will on average improve the estimation accuracy.
It is important to be aware of the limitations of each traditional approach to measuring software development productivity.
In addition, other factors such as ease of understanding and communicating the results of an approach, ease of use of an approach, and cost of introduction of an approach should be considered in a selection process.
Assessing the accuracy of estimates
The most common measure of the average estimation accuracy is the MMRE (Mean Magnitude of Relative Error), where the MRE of each estimate is defined as:
MRE = |actual effort − estimated effort| / actual effort
This measure has been criticized, and there are several alternative measures, such as more symmetric measures, the Weighted Mean of Quartiles of relative errors (WMQ), and the Mean Variation from Estimate (MVFE).
MRE is not reliable if the individual items are skewed. PRED(25) is preferred as a measure of estimation accuracy. PRED(25) measures the percentage of predicted values that are within 25 percent of the actual value.
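As a minimal sketch of how these accuracy measures are computed, the following fragment derives MMRE and PRED(25) from paired actual and estimated efforts; the sample figures are hypothetical and serve only to illustrate the calculations described above.

```python
# Hypothetical actual vs. estimated efforts (person-hours), used only to
# illustrate the MMRE and PRED(25) calculations.
actuals = [120.0, 80.0, 200.0, 45.0]
estimates = [100.0, 90.0, 140.0, 40.0]

def mre(actual: float, estimate: float) -> float:
    # Magnitude of Relative Error for a single project.
    return abs(actual - estimate) / actual

# MMRE: mean of the individual MREs.
mmre = sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

# PRED(25): share of estimates whose MRE is at most 25 percent.
pred25 = sum(mre(a, e) <= 0.25 for a, e in zip(actuals, estimates)) / len(actuals)

print(f"MMRE = {mmre:.2f}, PRED(25) = {pred25:.0%}")
```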
A high estimation error cannot automatically be interpreted as an indicator of low estimation ability. Alternative, competing or complementing, reasons include low cost control of the project, high complexity of the development work, and more delivered functionality than originally estimated. A framework for improved use and interpretation of estimation error measurement has been proposed in the literature.
Psychological issues
There are many psychological factors potentially explaining the strong tendency towards over-optimistic effort estimates that need to be dealt with to increase accuracy of effort estimates. These factors are essential even when using formal estimation models, because much of the input to these models is judgment-based. Factors that have been demonstrated to be important are: Wishful thinking, anchoring, planning fallacy and cognitive dissonance. A discussion on these and other factors can be found in work by Jørgensen and Grimstad.
It's easy to estimate what you know.
It's hard to estimate what you know you don't know. (known unknowns)
It's very hard to estimate things that you don't know you don't know. (unknown unknowns)
Humor
The chronic underestimation of development effort has led to the coinage and popularity of numerous humorous adages, such as ironically referring to a task as a "small matter of programming" (when much effort is likely required), and citing laws about underestimation:
Ninety-ninety rule: "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."
Hofstadter's law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."
Fred Brooks' law: "Adding manpower to a late software project makes it later."
In addition to the fact that estimating development effort is hard, it is worth noting that assigning more resources does not always help.
Comparison of development estimation software
See also
References
Software project management
Development estimation software
Software engineering costs |
23458743 | https://en.wikipedia.org/wiki/CompactRIO | CompactRIO | CompactRIO (or cRIO) is a real-time embedded industrial controller made by National Instruments for industrial control systems. The CompactRIO is a combination of a real-time controller, reconfigurable IO Modules (RIO), FPGA module and an Ethernet expansion chassis.
Hardware
The CompactRIO system is a combination of a real-time controller chassis, reconfigurable IO Modules (RIO), an FPGA module and an Ethernet expansion chassis. Third-party modules are also available, and are generally compatible with NI-produced chassis controllers.
CompactRIO real-time controllers include a microprocessor for implementing control algorithms, and support a wide range of clock frequencies. Controllers are only compatible with National Instruments C Series I/O Modules. I/O modules are hot swappable (can be connected/disconnected while the unit is powered up).
The FPGA Module may be used to implement high-performance data processing on reconfigurable fabric. Such data processing may be performed on data streaming in from connected I/O Modules. The module is powered by a Xilinx Virtex high-performance FPGA. The FPGA can be programmed separately and is connected to the real-time controller using an internal PCI bus.
The Ethernet chassis includes an Ethernet port (8P8C), which can connect the CompactRIO controller to a PC. The chassis is available in 4 slot and 8 slot varieties.
Third-party modules are manufactured for additional features, such as LCD or VGA displays. Newer, high-performance CompactRIO controllers also have built-in VGA graphics which can be connected to a monitor for observing operation.
Software
CompactRIO controllers can be programmed with LabVIEW, National Instruments' graphical programming language; C; C++; or Java. LabVIEW must be used to program the embedded FPGA.
The controller comes with a Linux based RTOS, NI Linux Real-Time, created as part of the Linux Foundation's Real-Time Linux Collaborative Project. Programs created in LabVIEW are compiled into machine code for NI Linux Real-Time and hardware description language (HDL) for the Xilinx FPGA toolchain automatically during deployment of the code to the target.
The Linux Real-Time OS running in the real-time controller supports a filesystem and hence data logging is also available at the controller level. The Full Development System version of LabVIEW does not come with the modules needed to program the cRIO. The Real-Time Module and FPGA Modules have to be purchased separately and installed with LabVIEW for programming the hardware. The programming is done on a Host PC running the Windows operating system and is deployed on the cRIO via Ethernet.
Applications
CompactRIO systems are often used as industrial control units where a small form factor is important.
CompactRIO controllers are commonly used as headless systems (without a user interface), designed to run in confined spaces under harsh conditions. CompactRIO systems can also be connected to a host PC, which can be used for supervisory purposes and for displaying logged data.
Other examples of applications areas are: Intelligent Systems for the Industrial Internet of Things (IIoT), Power Electronics and Inverter Control, Condition Monitoring of Rotating Equipment, Power Quality Monitoring, Transportation and Heavy Equipment, and Laser or Hydraulic Control.
The CompactRIO was used from 2009 until 2015 as the primary control unit in the FIRST Robotics Competition. It has been replaced now by the National Instruments roboRIO.
See also
CompactDAQ
roboRIO
References
External links
CompactRIO
Programmable logic controllers
Real-time computing |
54525921 | https://en.wikipedia.org/wiki/Promoting%20Resilience%20and%20Efficiency%20in%20Preparing%20for%20Attacks%20and%20Responding%20to%20Emergencies%20%28PREPARE%29%20Act%20of%202017 | Promoting Resilience and Efficiency in Preparing for Attacks and Responding to Emergencies (PREPARE) Act of 2017 | The Promoting Resilience and Efficiency in Preparing for Attacks and Responding to Emergencies Act, or PREPARE Act, of 2017 (H.R. 2922) is a bill introduced in the United States House of Representatives by U.S. Representative Dan Donovan (R-New York). The bill would assist American attempts to protect the nation from potential terror attacks and fortify emergency response capabilities through reauthorizing grants for programs that are necessary for disaster relief.
Some of the applications of these grants include training for first responders and improving communication between different levels of government to better respond to cyber threats. The bill also aims to promote transparency and minimize fraud by holding the Federal Emergency Management Agency to account. Additionally, the bill forces federal agencies to meet a standard of readiness so that they are always prepared to respond effectively and efficiently during emergency situations. Representative Donovan serves as the current chairman of the House Subcommittee on Emergency Preparedness, Response, and Communications.
In June 2017, the PREPARE Act was unanimously passed by the House Committee on Homeland Security as a component of the Department of Homeland Security Authorization Act of 2017.
In addition to the House Committee on Homeland Security, the bill was referred to the House Committee on Transportation and Infrastructure, House Committee on Energy and Commerce, and Subcommittee on Emergency Preparedness, Response and Communications. As of mid-July 2017, the bill awaited further action in the House.
The bill has a total of 3 cosponsors: Representatives Michael McCaul (R-Texas), Peter King (R-New York), and Brian Fitzpatrick (R-Pennsylvania).
Donovan introduced the bill as a result of a spike in severe terror attacks and natural disasters around the world. He said, "This bill provides our first responders and communities with the resources they need to prevent and prepare for emergency situations, while also helping ensure that our agencies are constantly improving our federal response capabilities." As additional motivations for writing the PREPARE Act, Donovan cited a Homeland Security Committee report that discovered 39 jihadi plots that were completely developed in the United States, along with the London and Manchester terror attacks. Rather than simply focus on security threats posed by terrorism, Donovan lamented increasingly frequent and severe natural disasters. He pointed to 15 different extreme weather events in the United States in 2016 alone, each causing more than $1 billion in economic loss.
Background
As threats against the United States from a multitude of sources continue to expand, terror attacks and climate change have become particular areas of focus. In the past, the federal government has not been completely prepared for either terror attacks or consequences of worsening climate change. The PREPARE Act would provide solutions to issues in American defense strategies so that the government can become more capable of responding to emergency situations, especially those posed by terror and climate change.
Explaining lack of American readiness for certain terror attacks, Michael Waltz, former U.S. Army Special Forces commander, said, "... the Department of Homeland Security and the FBI are lagging and need to get much smarter to focus… before there's a major incident we all fear."
Although the Department of Homeland Security does not publicly release data about the number of terror attack attempts per day and lives saved as a result of its efforts, due to the inherently private nature of the information, the department plays a central role in preventing terror attacks and protecting the American public. The PREPARE Act aims to reinforce the Department of Homeland Security's mission of terrorism prevention by more clearly outlining the responsibilities of its many units.
American security has been damaged in the past as a result of cyber attacks, which are discussed under the PREPARE Act. Thirteen federal agencies and institutions have been hacked since 2014, including the Department of Health and Human Services, the White House, the United States Postal Service, the Department of State, the Federal Aviation Administration, the Internal Revenue Service, and the Office of Personnel Management, among others. These hacks affected tens of millions of government employees and private citizens. Cyber threats continue to grow, especially against the United States government and its subdivisions.
Natural disasters have also become an increasingly dangerous threat to the American public, and the PREPARE Act aims to make emergency and relief services more efficient so that as many victims as possible can be assisted as quickly as possible. Over the past five decades, most of the United States has witnessed spikes in extended periods of excessively warm temperatures, torrential storms, and severe floods and droughts. In 2011 and 2012, the number of intense heat waves was approximately triple the long-term average. The 2011 drought in Texas and Oklahoma was a result of more than 100 days over 100 degrees Fahrenheit. Both states set new records for the hottest summer since record-keeping began in 1895. Depleted water sources led to $10 billion in direct losses to agriculture in those two states alone. In other regions, heavy rainfall has led to significant flooding problems. Between 1959 and 2005, 4,586 Americans were killed by floodwaters. Damage to property and crops averaged $8 billion per year between 1981 and 2011. Since the early 1980s, when scientists began to collect accurate data on hurricanes, Atlantic hurricanes have grown in intensity, frequency, and duration, partially as a result of higher surface temperatures of ocean waters. Models project future hurricanes to be the strongest in recorded history.
The PREPARE Act aims to minimize fraud in government response to natural disasters through the inclusion of language from the Flood Insurance Mitigation and Policyholder Protection Act. A 60 Minutes report found that engineers who were working for insurance companies that were operating under the Federal Emergency Management Agency altered damage reports of homes that were harmed by Hurricane Sandy. The engineers reported less damage than had actually occurred in changed reports, in attempts to decrease the amount of money given to homeowners who were desperate for repair funds.
Major provisions
The PREPARE Act aims to equip emergency response services with necessary tools to respond in an optimal fashion to challenges to American security, especially those posed by terror and natural disasters. Because the PREPARE Act discusses a wide variety of Department of Homeland Security operations, the bill is organized into four key sections:
Title I - Grants, Training, Exercises, and Coordination. This section of the bill retains funds allocated to the Urban Area Security Initiative and describes the state Homeland Security Grant Program, along with other programs that seek to aid law enforcement officers in their continuous battle against terrorism (both traditional and cyber).
Title II - Communications. This section of the bill includes details on the Office of Emergency Communications, describes the role of the Office of Emergency Communications Director, explains the responsibilities of the National Emergency Communications Plan, and encourages the sharing of information between agencies to mitigate cyber threats through discussion of the Public Safety Broadband Network.
Title III - Medical Preparedness. In this portion of the bill, Donovan discusses the position of the Chief Medical Officer and a Medical Countermeasures Program that would better prepare the nation's readiness for a health threat, especially resources operating under the Department of Homeland Security itself.
Title IV - Management. The final area of the bill provides details on mission support, modernization of systems, and a plan for supporting human capital.
Donovan outlined the major aspects of the PREPARE Act that he believes will best aid the Department of Homeland Security:
Reauthorization of grant programs responsible for training and providing resources to first responders.
Improvement of transparency and accountability related to Federal Emergency Management Agency grant usage.
Implementation of minimum readiness standards to guarantee that federal agencies are prepared to respond to disasters at the time they happen to strike.
Enhancement of cyber-threat and information sharing between authorities at all levels, including local, state, and federal.
Requirement that federal departments and agencies work together to close capability gaps that are exposed in after-action reports.
Particular clauses of the bill amend operations of the Department of Homeland Security in new ways. For example:
The Allowable Uses clause authorizes two new uses of funding. The first is to reinvigorate the American response to potential medical crises, including the creation and continued upkeep of a pharmaceutical stockpile, which would include medical kits and protective services for first responders, victims, and impacted populations after chemical or biological attacks. The second is to optimize cybersecurity to strengthen preparation and potential to effectively respond to a wide range of cyber threats.
The Port Security Program authorizes $200 million per year for grants through 2022.
The Department of Homeland Security would be required to assist fusion centers in a more active way to protect cybersecurity. The department would have to provide technical support, information to indicate cyber threats, defense techniques, potential risks to cybersecurity, and knowledge specific to maintaining technological integrity during elections.
Under the Medical Countermeasures Program, the DHS would be responsible for enhancing the readiness of personnel to protect resources (including employees and animals) in the case of a chemical, biological, radiological, nuclear, or explosives attack.
Regarding FEMA fraud, the PREPARE Act aims to minimize issues by amending procedures several ways. The language in the bill forces engineers to give copies of original reports to homeowners, so accountability can be maximized. Further, the bill gives more time to homeowners to file appeals or pursue lawsuits in court if the National Flood Insurance Program denies claim appeals. The bill extends the time period from one year to two years to file an appeal, and allows people to file lawsuits up to ninety days after a claim is denied.
Legislative history
During the 114th Congress, on September 22, 2015, Representative Martha McSally (R-Arizona) introduced an earlier version of the bill. The bill was passed in the U.S. House on April 26, 2016, and was received in the Senate and referred to the Senate Committee on Homeland Security and Governmental Affairs on April 27, 2016. No further action was taken.
On June 15, 2017, Donovan introduced the bill. After introduction:
6/15/17 - The bill was referred to the House Energy and Commerce Committee, the House Transportation and Infrastructure Committee, and the House Homeland Security Committee.
6/28/17 - The bill was referred to the committee's Subcommittee on Emergency Preparedness, Response, and Communications.
As of July 11, 2017, the bill awaits further action in the House of Representatives.
See also
Emergency management
First responders
References
External links
FEMA official website
National Preparedness System (FEMA-operated program)
United States Department of Homeland Security official website
Proposed legislation of the 115th United States Congress |
39120497 | https://en.wikipedia.org/wiki/Navitaire%20Inc%20v%20Easyjet%20Airline%20Co.%20and%20BulletProof%20Technologies%2C%20Inc. | Navitaire Inc v Easyjet Airline Co. and BulletProof Technologies, Inc. | Navitaire Inc v Easyjet Airline Co. and BulletProof Technologies, Inc., is a decision by the England and Wales High Court of Justice (Chancery Division). The case involved a copyright infringement claim brought by Navitaire Inc. ("Navitaire") against EasyJet Airline Company ("EasyJet") and Bulletproof Technologies, Inc. ("Bulletproof") with regards to software used to construct an airline booking (ticket reservation) system. Curiously, it was not claimed that Defendant had access to the original source code or that Defendant's source code resembled Plaintiff's in any way.
The case affirms that it is only the source code or object code of a program - i.e. the underlying framework - that may be protected by copyright. The programming language used to create the program, as well as the program's functional aspects and interfaces, are not protected. This is because computer programs are unusual in that one can achieve a similar end result through different means. However, artistic aspects may be protected. That is, copyright subsists in visual images created as icons or Graphical User Interfaces (GUIs), and the Directive on the Legal Protection of Computer Programs will not apply to these images. Specific to this case, it was held that writing original source code that results in a similar or an identical function to another program does not result in infringement of that program.
Navitaire also confirmed the notion that an injunction would be granted only where it wasn't oppressive.
The Navitaire Court's approach has been confirmed in other opinions. In the Court of Appeal's 2007 decision in Nova Productions Limited v Mazooma Games Limited, the court held that a program does not infringe another where it produces similar results but has different underlying source code.
Attorneys
Henry Carr QC, Mark Vanhegan and Anna Edwards-Stuart (instructed by Field Fisher Waterhouse) for the Claimants
Richard Arnold QC and Brian Nicholson (instructed by Herbert Smith) for the Defendants
The Parties
Claimant: Navitaire Inc. ("Navitaire") developed a system called "OpenRes," a ticketless airline booking application used by a number of airlines. Users do not receive a ticket but are given a single reference number with which to check in at the airport. Navitaire owns the copyright in various works that make up the source code of the OpenRes software. OpenRes is predominantly coded in COBOL.
Navitaire's predecessors were Open Skies, Inc. and the Open Skies Division of Hewlett Packard. Open Skies coded and developed the web interface for OpenRes, called "TakeFlight". The court refers to Navitaire and Open Skies, Inc. collectively as Navitaire. The "TakeFlight" module consists only of source code.
Defendant 1: EasyJet Airline Co. ("Easyjet") is a well-known low-cost airline. Navitaire's predecessors had granted Easyjet a license for OpenRes.
Defendant 2: BulletProof Technologies Inc. ("BulletProof") is a software developer located in California, who was hired by EasyJet to code the allegedly infringing system, "eRes".
Facts
Navitaire developed an airline booking system called "OpenRes". Its predecessor (Open Skies) had licensed the software to easyJet. easyJet did not have access to the underlying code, and Navitaire does not suggest that easyJet or BulletProof had access to "OpenRes". However, after studying the functionality of OpenRes, easyJet and Bulletproof developed a system called "eRes" and also a web interface that was substantially indistinguishable from OpenRes. easyJet does not dispute the allegation that it wanted a new system that was substantially indistinguishable from OpenRes' interface. Also, it is not disputed that the underlying code of eRes does not resemble OpenRes' codes at all. easyJet created its program by studying and observing how Navitaire's system worked.
Although the code is different, "eRes" acts upon identical or very similar inputs and produces similar results to "OpenRes". Thus, Navitaire filed an action alleging copyright infringement based on "non-textual copying." Specifically, Navitaire claimed that the similarity of the "business logic" (that is, the overall look and feel of the software) and the functionality of the software rose to the level of "non-textual copying."
Also, with regards to "TakeFlight", it is known that easyJet copied and modified the code on several occasions to fix bugs, provide for the display of promotions, and to provide foreign-language interfaces, as the code was not internationalized. Navitaire alleges that easyJet breached the license and again alleges "non-textual copying" of the software when easyJet produced a user interface with the same "look and feel" as TakeFlight.
As a general matter, the Court stated that: "To emulate the action of a piece of software by the writing of other software that has no internal similarity to the first but is deliberately designed to 'look' the same and achieve the same results is far from uncommon. If Navitaire are right in their most far-reaching submission, much of such work may amount to the infringement of copyright in the original computer program, even if the alleged infringer had no access to the source code for it and did not investigate or decompile the executable program."
Non-Textual Copying
Non-textual copying can be raised when access is not an issue. There are three aspects of non-textual copying:
adoption of the "look and feel" of the software (here, OpenRes)
detailed copying of many of the individual commands entered by the user to achieve particular results
copying of certain of the results, in the form of screen displays and of 'reports' displayed on the screen in response to prescribed instructions
The Interfaces
The OpenRes system consists of a database as well as a series of programs that manipulate the data. Each interface consists of single and complex commands that were entered by the user and the relevant display screens.
These different interfaces were:
the terminal user interface - what the travel agent interacts with by typing commands; this interface takes the commands an Agent has input, recognizes the commands, and then formats the result of those commands to be displayed on the 'green screen' (aka the terminal user interface)
the Fares and Scheduling interface - the appearance of the graphical user interface (GUI) at the database administrator's terminal
Internet User Interface or TakeFlight interface- the screens that the user interacts with on the Internet on their personal computer
structure of the OpenRes database and the names of objects stored within
Navitaire's Allegations
With regards to the databases, Navitaire claimed copyright infringement occurred at two points. (1) In transferring or 'migrating' the data contained in OpenRes databases to the new eRes system, easyJet made interim copies of existing OpenRes databases, which it had not been granted permission to do. (2) easyJet and BulletProof used their knowledge of the OpenRes databases to design the eRes databases in a way that, Navitaire claimed, infringed copyright in the structure of the program. The court did not find infringement, since the eRes databases are not manipulated in the same way as OpenRes.
Navitaire alleged that eRes violated copyright by replicating the overall "look and feel" of the software (i.e., the "business logic"); by relying on and requiring identical or similar commands to be entered by an operator as in the OpenRes system; by copying the icons displayed in the GUI; and by copying the text-based screen displays as well as other results produced by the software.
Four Classes of Relevant Copyright Works
There were four classes of relevant copyright works that were identified:
literary works: comprising the title, form, and nature of each of the literary codes represented by the user command codes
complex commands - literary codes were divided into simple commands and complex commands, with complex commands allowing varying arguments. Dr. Hunt described these as being ones "where the user enters a mixture of command characters and data and has a number of sub-options or choices."
all OpenRes user command codes ("compilation") - this was an alternate legal basis to allege that eRes commands are identical to or similar to OpenRes commands
the layouts of particular screens of the terminal user interface
Summary of Allegations
In summary, the issues are as follows. Navitaire contend that copyright subsists in the command set as a copyright work distinct from the source code. This claim has a number of aspects: (i) the collection of commands as a whole is entitled to copyright as a 'compilation'; (ii) each of the commands is a copyright work in its own right; (iii) alternatively, each of the 'complex' commands is a work in its own right. As to the displays, Navitaire contend that (i) in respect of the VT100 screen displays, the 'template' (fixed data and layout of variable data) is a separate copyright work for each display and (ii) certain GUI screens on the separate Schedule Maintenance module are copyright works as they stand and have been copied. Then it is said (and this is a quite distinct allegation) that the similarity exhibited by eRes to OpenRes in the eye of the user is such that there has been 'non-textual copying' of the whole of the source code. This is said to be strictly analogous to taking the plot of a book[12]: an author who takes the plot of another work and copies nothing else will still infringe copyright if a substantial part of the earlier author's work is represented by that plot, and the same goes for computer programs: John Richardson Computers v Flanders [1993] FSR 497 (Ferris J).
easyJet's Position
easyJet accepts that copyright subsists in the source code of OpenRes. However, they stressed to the court that, with regards to the user interface, the only question to be considered was whether a substantial part was taken, since none of the code was directly copied. eRes only used some of the command names, which is not a substantial part of the source code. They did not agree that copyright subsists in a command set either as individual commands or as a compilation, since these were not works to begin with. Moreover, the graphical displays were also not works. So, they contended that "non-textual copying" would extend protection in copyright to matters that cannot legitimately be covered by copyright. That is, it was argued that the claim went to the "functional idea of the program, rather than to the expression of that idea in software."
Issue(s)
Does replication of the "look and feel" of a computer program - using similar inputs and that produces similar outputs - count as "non-textual" copying of a computer program? That is, can the "business logic" of a program be protected?
Does copyright subsist in single word commands, complex commands, or the collection of commands as a whole? If yes, which ones?
Does copyright subsist in graphical screen displays and icons? If so, which ones?
Holding
(1) NO - When it comes to computer programs, copyright law does not protect against "non-textual" copying.
As long as the underlying source code is different, there is no problem if the ultimate look and feel are similar. In the present case, the fact that easyJet did not have access aided the court in finding no infringement. Moreover, the court took into account that the peculiar aspect of computer programs is that there are several different ways of producing a similar or identical result. So, the "business logic" (i.e., functionality) of a program cannot be protected by copyright law. Finding otherwise would extend copyright unjustifiably.
Based on Navitaire, merely copying the look and feel of a program or website does not rise to the level of infringement of copyright. That is, using underlying ideas and principles, without copying the actual expression (source code), does not infringe on another even if the functionality is the same. However, the appearance may be argued as infringing on an artistic work (see details below).
The Metaphor:
Navitaire urged that easyJet, by studying the OpenRes system's functionality, had taken a substantial part of the source code, much as an author who reads a novel, takes its plot, and uses that same plot in a new novel. However, the court disagreed with this reasoning and found that computer programs and code were not like a novel. Instead, the court found that the case was more like a chef who invents a pudding: one chef, after several tries, comes up with a tasty pudding dish and writes down the recipe. Another chef then tastes it and decides to recreate it, but comes up with his own recipe. This would not be considered infringement because, although the results were the same, the means used to derive them were different.
(2) NO - Single words, complex commands, and compilation of commands do not qualify as literary works.
Single word commands do not qualify as literary works and do not have the necessary qualities of a literary work. Based on the 1988 Act, the test to be considered is "merely whether a written artefact is to be accorded the status of a copyright work having regard to the kind of skill and labour expended, the nature of copyright protection and its underlying policy."
Complex commands (i.e. commands that have a syntax or have one or more arguments that must be expressed in a particular way) also do not qualify. The 1988 Act mandates that a literary work be written or recorded. Moreover, Recitals 13-15 of the Software Directive reinforce that computer languages may not be copyrighted. In the present case, there was no identifiable "literary work" that embodied the command codes. Similarly, collections of commands count as a language and cannot be protected as a compilation. Protection of a computer program may not be extended to functionality alone.
"Copyright protection for computer software is a given, but I do not feel that the courts should be astute to extend that protection into a region where only the functional effects of a program are in issue. There is a respectable case for saying that copyright is not, in general, concerned with functional effects, and there is some advantage in a bright line rule protecting only the claimant's embodiment of the function in software and not some superset of that software. The case is not truly analogous with the plot of a novel, because the plot is part of the work itself. The user interface is not part of the work itself. One could permute all the letters and other codes in the command names, and it would still work in the same way, and all that would be lost is a modest mnemonic advantage. To approach the problem in this way may at least be consistent with the distinction between idea and expression that finds its way into the Software Directive, but, of course, it draws the line between idea and expression in a particular place which some would say lies too far on the side of expression. I think, however, that such is the independence of the particular form of the actual codes used from the overall functioning of the software that it is legitimate to separate them in this way, and not to afford them separate protection when the underlying software is not even arguably copied."
(3) YES BUT NOT ALL - Only an artistic work can rise to the level of copyright protection.
In the present case, the VT100 screen displays did not rise to this level. They were considered to be tables and found to be literary in character. Based on Article 1(2) of the Directive, these were simply ideas underlying the computer program's interfaces, providing merely "the static framework for the display of the dynamic data which it is the task of the software to produce."
However, the Graphic User Interfaces (GUIs) and icons qualified as artistic works and were given protection. This was due to the skill and labour required to arrange the screens in a certain way. The icons too were copyrighted works. The court found that since the icons used in the GUIs were copyrighted works, and easyJet had made identical copies, easyJet had infringed Navitaire's copyright.
Claims with regards to TakeFlight
easyJet managed to convince the court that as licensees they were permitted to alter and modify the program to resolve any bugs and make modifications that they were hired to do. However, the court found infringement where database fields were reproduced unnecessarily.
Summary of Elements at Issue and the Court's Decision regarding Protection
Relevant Law
Copyright Law and Directive No. 96/9/EC
In the US and the UK, copyright grants protection for the expression of an idea and not the idea itself. A database right, according to the European Union, is a property right. In the EU it is defined by Directive No. 96/9/EC. This Directive on the legal protection of databases was passed on 11 March 1996 and affords legal protection to databases, granting both specific and separate legal rights and limitations to computer records. These rights are collectively known as database rights.
Article 1(2) of the Software Directive
See for Full Text. The court analyzed this section as making clear the important dichotomy of copyright law that ideas are not protected, but the expression is. Thus, the code as written is protected, but not the interfaces, function, or programming language.
Copyright, Designs and Patents Act 1988
See for Full Text
Literary, dramatic and musical works
3.—(1) In this Part—
"literary work' means any work, other than a dramatic or musical work, which is written, spoken or sung, and accordingly includes—
(a) a table or compilation other than a database,
(b) a computer program,
(c) preparatory design material for a computer program and
(d) a database;
…
(2) Copyright does not subsist in a literary, dramatic or musical work unless and until it is recorded, in writing or otherwise; and references in this Part to the time at which such a work is made are to the time at which it is so recorded.
Provision Added by reg 15 of the Copyright and Related Rights Regulations 2003
50BA.—(1) It is not an infringement of copyright for a lawful user of a computer program to observe, study or test the functioning of the program in order to determine the ideas and principles which underlie any element of the program if he does so while performing any of the acts of loading, displaying, running, transmitting or storing the program which he is entitled to do.
(2) Where an act is permitted under this section, it is irrelevant whether or not there exists any term or condition in an agreement which purports to prohibit or restrict the act (such terms being, by virtue of section 296A, void).
See also
Intellectual Property
References
Further reading
Other EWHC Opinions
Full Text of Case
Copyright Law in the UK - Essential Reading
Another Opinion by Court of Appeals in Nova Productions Limited vs. Mazooma Games Limited validating Navitaire's line of reasoning
Can Copyright Law be Used to Protect the Look and Feel of a Website?
Indirect Copying of Computer Programs - Infringing or Non-infringing?
Sampson, Geoffrey, "Law for Computing Students" - Navitaire discussed on page 52
Databases in the United Kingdom |
3064285 | https://en.wikipedia.org/wiki/Mutual%20authentication | Mutual authentication | Mutual authentication or two-way authentication (not to be confused with two-factor authentication) refers to two parties authenticating each other at the same time in an authentication protocol. It is a default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS).
Mutual authentication is a desired characteristic in verification schemes that transmit sensitive data, in order to ensure data security. Mutual authentication can be accomplished with two types of credentials: usernames and passwords, and public key certificates.
Mutual authentication is often employed in the Internet of Things (IoT). Writing effective security schemes in IoT systems can become challenging, especially when schemes are desired to be lightweight and have low computational costs. Mutual authentication is a crucial security step that can defend against many adversarial attacks, which otherwise can have large consequences if IoT systems (such as e-Healthcare servers) are hacked. In scheme analyses done of past works, a lack of mutual authentication had been considered a weakness in data transmission schemes.
Process steps and verification
Schemes that have a mutual authentication step may use different methods of encryption, communication, and verification, but they all share one thing in common: each entity involved in the communication is verified. If Alice wants to communicate with Bob, they will both authenticate the other and verify that it is who they are expecting to communicate with before any data or messages are transmitted. A mutual authentication process that exchanges user IDs may be implemented as follows:
Alice sends an encrypted message to Bob to show that Alice is a valid user.
Bob verifies message:
Bob checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
The message is then decrypted with Bob's secret key and Alice's ID.
Bob checks if the message matches a valid user. If not, the session is aborted.
Bob sends Alice a message back to show that Bob is a valid user.
Alice verifies the message:
Alice checks the format and timestamp. If either is incorrect or invalid, the session is aborted.
Then, the message is decrypted with Alice's secret key and Bob's ID.
Alice checks if the message matches a valid user. If not, the session is aborted.
At this point, both parties are verified to be who they claim to be and safe for the other to communicate with. Lastly, Alice and Bob will create a shared secret key so that they can continue communicating in a secure manner.
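A minimal sketch of such an exchange is shown below, using a pre-shared symmetric key and message authentication codes in place of the encrypted messages described above; the party names, key handling, and 60-second freshness window are illustrative assumptions rather than part of any specific protocol.

```python
# Minimal illustration of a timestamp-based mutual authentication exchange
# over a pre-shared symmetric key. MACs stand in for the encrypted messages
# described above; all names and the freshness window are illustrative.
import hmac, hashlib, time

SHARED_KEY = b"pre-shared-secret"   # assumed to be distributed out of band
VALID_USERS = {"alice", "bob"}
MAX_SKEW = 60                       # seconds a message stays fresh

def make_message(sender_id: str) -> dict:
    ts = int(time.time())
    mac = hmac.new(SHARED_KEY, f"{sender_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    return {"id": sender_id, "ts": ts, "mac": mac}

def verify_message(msg: dict) -> bool:
    # 1. Check the timestamp; abort if the message is stale.
    if abs(time.time() - msg["ts"]) > MAX_SKEW:
        return False
    # 2. Recompute the MAC with the shared key (the "decrypt" step in the text).
    expected = hmac.new(SHARED_KEY, f'{msg["id"]}|{msg["ts"]}'.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        return False
    # 3. Check that the claimed identity is a known user.
    return msg["id"] in VALID_USERS

# Alice proves herself to Bob, then Bob proves himself to Alice.
assert verify_message(make_message("alice"))
assert verify_message(make_message("bob"))
# Both directions verified; the parties would now derive a shared session key.
```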
To verify that mutual authentication has occurred successfully, Burrows-Abadi-Needham logic (BAN logic) is a well regarded and widely accepted method to use, because it verifies that a message came from a trustworthy entity. BAN logic first assumes an entity is not to be trusted, and then will verify its legality.
Defenses
Mutual authentication supports zero trust networking because it can protect communications against adversarial attacks, notably:
Man in the middle attack: Man-in-the-middle (MITM) attacks are when a third party wishes to eavesdrop or intercept a message, and sometimes alter the intended message for the recipient. The two parties openly receive messages without verifying the sender, so they do not realize an adversary has inserted themselves into the communication line. Mutual authentication can prevent MITM attacks because both the sender and recipient verify each other before sending them their message keys, so if one of the parties is not verified to be who they claim they are, the session will end.
Replay attacks: A replay attack is similar to a MITM attack in which older messages are replayed out of context to fool the server. However, this does not work against schemes using mutual authentication, because timestamps are used as a verification factor in the protocols. If the change in time is greater than the maximum allowed time delay, the session is aborted. Similarly, messages can include a randomly generated number to keep track of when a message was sent.
Spoofing attacks: Spoofing attacks rely on using false data to pose as another user in order to gain access to a server or be identified as someone else. Mutual authentication can prevent spoofing attacks because the server will authenticate the user as well, and verify that they have the correct session key before allowing any further communication and access.
Impersonation attacks: When each party authenticates the other, they send each other a certificate that only the other party knows how to unscramble, verifying themselves as a trusted source. In this way, adversaries cannot use impersonation attacks because they do not have the correct certificate to act as if they are the other party.
Mutual authentication also ensures information integrity because if the parties are verified to be the correct source, then the information received is reliable as well.
mTLS
By default the TLS protocol only proves the identity of the server to the client using X.509 certificates, and the authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication. As it requires provisioning of certificates to the clients and involves a less user-friendly experience, it is rarely used in end-user applications.
Mutual TLS authentication (mTLS) is more often used in business-to-business (B2B) applications, where a limited number of programmatic and homogeneous clients are connecting to specific web services, the operational burden is limited, and security requirements are usually much higher as compared to consumer environments.
mTLS is also used in microservice applications built on runtimes such as Dapr, via systems like SPIFFE.
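As an illustration, a TLS server can be switched from one-way TLS to mTLS by requiring a client certificate. The sketch below uses Python's standard ssl module; the certificate file names and port are placeholders, and a production deployment would add error handling and certificate management.

```python
import socket
import ssl

# Server side: present our own certificate and require one from the client.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # server identity
context.load_verify_locations(cafile="trusted-client-ca.crt")          # CA that signed client certs
context.verify_mode = ssl.CERT_REQUIRED   # this single line turns one-way TLS into mTLS

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()   # handshake fails unless the client presents a valid cert
        print("client certificate:", conn.getpeercert())
```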
Lightweight schemes vs. secured schemes
While lightweight schemes and secure schemes are not mutually exclusive, adding a mutual authentication step to data transmission protocols often increases runtime and computational cost. This can become an issue for network systems that cannot handle large amounts of data, or that must constantly update with new real-time data (e.g. location tracking or real-time health data).
Thus, it becomes a desired characteristic of many mutual authentication schemes to have lightweight properties (e.g. a low memory footprint) in order to accommodate systems that store large amounts of data. Many systems implement cloud computing, which allows quick access to large amounts of data, but sometimes large amounts of data can slow down communication. Even with edge-based cloud computing, which is faster than general cloud computing because of the closer proximity between server and user, lightweight schemes allow for more speed when managing larger amounts of data. One solution for keeping schemes lightweight during the mutual authentication process is to limit the number of bits used during communication.
Applications that rely solely on device-to-device (D2D) communication, in which multiple devices communicate locally in close proximity, remove the third-party network. This in turn can speed up communication time. However, the authentication still occurs through insecure channels, so researchers consider it important that mutual authentication still takes place in order to keep the scheme secure.
Schemes may sacrifice a better runtime or storage cost when ensuring mutual authentication in order to prioritize protecting the sensitive data.
Password-based schemes
In mutual authentication schemes that require a user's input password as part of the verification process, there is a higher vulnerability to attackers because the password is human-made rather than a computer-generated certificate. While applications could simply require users to use a computer-generated password, such passwords are inconvenient for people to remember. User-made passwords and the ability to change one's password are important for making an application user-friendly, so many schemes work to accommodate these characteristics. Researchers note that a password-based protocol with mutual authentication is important because user identities and passwords are still protected, as the messages are only readable by the two parties involved.
However, a drawback of password-based authentication is that password tables can take up a lot of memory. One way around this is to use one-time passwords (OTPs), which are passwords sent to the user via SMS or email. OTPs are time-sensitive: they expire after a certain amount of time and do not need to be stored.
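A minimal sketch of such an expiring one-time password follows, assuming the code is delivered out of band (for example by SMS or email) and simply discarded once its validity window has passed. The lifetime and code length are arbitrary choices for the example.

```python
import hmac
import secrets
import time

OTP_LIFETIME = 300   # seconds the code stays valid (an arbitrary choice)

def issue_otp():
    """Generate a 6-digit code and record when it expires."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + OTP_LIFETIME   # the code itself would be sent via SMS or email

def check_otp(submitted, issued_code, expires_at):
    """Accept the code only while it is still within its lifetime."""
    if time.time() > expires_at:
        return False                                  # expired: nothing needs to stay in a password table
    return hmac.compare_digest(submitted, issued_code)
```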
Multi-factor authentication
More recent schemes use higher-level authentication than password-based schemes. While password-based authentication is considered "single-factor authentication", newer schemes implement smart card (two-factor) or biometric (three-factor) authentication. Smart cards are simpler to implement and easy to use for authentication, but still carry a risk of being tampered with. Biometrics have grown more popular than password-based schemes because session keys are more difficult to copy or guess when biometrics are used, but encrypting noisy biometric data can be difficult. Due to these security risks and limitations, schemes can still employ mutual authentication regardless of how many authentication factors are added.
Certificate based schemes and system applications
Mutual authentication is often found in schemes employed in the Internet of Things (IoT), where physical objects are incorporated into the Internet and can communicate via IP address. Authentication schemes can be applied to many types of systems that involve data transmission. As the Internet's presence in mechanical systems increases, writing effective security schemes for large numbers of users, objects, and servers can become challenging, especially when needing schemes to be lightweight and have low computational costs. Instead of password-based authentication, devices will use certificates to verify each other's identities.
Radio networks
Mutual authentication can be satisfied in radio network schemes, where data transmissions through radio frequencies are secure after verifying the sender and receiver.
Radio frequency identification (RFID) tags are commonly used for object detection, and many manufacturers are implementing them in their warehouse systems for automation. This allows for a faster way to keep up with inventory and track objects. However, keeping track of items in a system with RFID tags that transmit data to a cloud server increases the chances of security risks, as there are now more digital elements to keep track of. A three-way mutual authentication can occur between the RFID tags, the tag readers, and the cloud network that stores this data, in order to keep RFID tag data secure and protected from manipulation.
Similarly, an alternate RFID tag and reader system that assigns designated readers to tags has been proposed for extra security and low memory cost. Instead of considering all tag readers as one entity, only certain readers can read specific tags. With this method, if a reader is breached, it will not affect the whole system. Individual readers will communicate with specific tags during mutual authentication, which runs in constant time as readers use the same private key for the authentication process.
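The following simplified sketch shows the kind of symmetric challenge-response that such tag and reader schemes build on. The published protocols differ in their details; the helper names and message contents here are assumptions for illustration only.

```python
import hashlib
import hmac
import secrets

def reader_challenge():
    """Reader sends a fresh random nonce to the tag."""
    return secrets.token_bytes(16)

def tag_response(shared_key, challenge):
    """Tag proves knowledge of the shared key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def reader_verify(shared_key, challenge, response):
    """Reader recomputes the expected answer and compares in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# For mutual authentication, the same exchange is also run in the opposite
# direction, with the tag challenging the reader.
```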
Many e-Healthcare systems that remotely monitor patient health data use wireless body area networks (WBAN) that transmit data through radio frequencies. This is beneficial for patients who should not be disturbed while being monitored, and can reduce the workload for medical workers and allow them to focus on more hands-on jobs. However, a large concern for healthcare providers and patients about remote health data tracking is that sensitive patient data is transmitted through unsecured channels, so authentication occurs between the medical body area network user (the patient), the Healthcare Service Provider (HSP) and the trusted third party.
Cloud based computing
e-Healthcare clouds are another way to store patient data collected remotely. Clouds are useful for storing large amounts of data, such as medical information, that can be accessed by many devices whenever needed. Telecare Medical Information Systems (TMIS), an important way for medical patients to receive healthcare remotely, can ensure secured data with mutual authentication verification schemes. Blockchain is one way that has been proposed to mutually authenticate the user to the database, by authenticating with the main mediBchain node and keeping patient anonymity.
Fog-cloud computing is a networking system that can handle large amounts of data, but still has limitations regarding computational and memory cost. Mobile edge computing (MEC) is considered to be an improved, more lightweight fog-cloud computing networking system, and can be used for medical technology that also revolves around location-based data. Due to the large physical range required of locational tracking, 5G networks can send data to the edge of the cloud to store data. An application like smart watches that track patient health data can be used to call the nearest hospital if the patient shows a negative change in vitals.
Fog node networks can be implemented in car automation, keeping data about the car and its surrounding states secure. By authenticating the fog nodes and the vehicle, vehicular handoff becomes a safe process and the car’s system is safe from hackers.
Machine to machine verification
Many systems that do not require a human user as part of the system also have protocols that mutually authenticate between parties. In unmanned aerial vehicle (UAV) systems, a platform authentication occurs rather than user authentication. Mutual authentication during vehicle communication prevents one vehicle's system from being breached, which can then affect the whole system negatively. For example, a system of drones can be employed for agriculture work and cargo delivery, but if one drone were to be breached, the whole system has the potential to collapse.
See also
Authentication
Authentication protocol
TLS
Computer security
BAN Logic
Digital signature
Secure channel
Zero trust security model
External links
Two types of Mutual Authentication
References
Authentication methods
Computer access control |
35297710 | https://en.wikipedia.org/wiki/Franklin%20H.%20Westervelt | Franklin H. Westervelt | Franklin Herbert Westervelt ( – ) was an American engineer, computer scientist, and educator at the University of Michigan and Wayne State University. Westervelt received degrees in Mathematics, Mechanical and Electrical Engineering from the College of Engineering at the University of Michigan. He attained his PhD in 1961. He was a Professor of Mechanical Engineering at the University of Michigan and an Associate Director at the U-M Computing Center. He was involved in early studies on how to use computers in engineering education.
Biography
He was born in Benton Harbor, Michigan on to Herbert Oleander Westervelt and Dorothy Ulbright.
From 1965 to 1970 he was Project Director for the ARPA-sponsored CONCOMP (Research in Conversational Use of Computers) Project. He was involved in the design of the architecture and negotiations with IBM over the virtual memory features that would be included in what became the IBM S/360 Model 67 computer. When IBM's TSS/360 time-sharing operating system for the S/360-67 was not available, the CONCOMP project supported the initial development of the Michigan Terminal System (MTS) in cooperation with the staff of the University of Michigan Computing Center. This included David L. Mills's development of the original PDP-8 Data Concentrator with its interface to an IBM S/360 Input/Output channel, the first such interface to be built outside of IBM. CONCOMP also developed the integration of the IBM 7772-based Audio Response Unit (ARU) as an MTS I/O device, the MAD/I compiler, mini-computer based graphics terminals, and the Set-Theoretic Data Structure model that was later used in ILIR:MICRO.
ARPANET program manager Larry Roberts asked Frank to explore the questions of message size and contents for the ARPANET, and to write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and re transmission, and computer and user identification." Frank also served as a representative to the statewide Michigan Inter-university Committee on Information Systems (MICIS) and was involved in establishing the MERIT Computer Network.
Fred Gibbons, a successful entrepreneur and venture capitalist, said that the University of Michigan College of Engineering, where he earned his BSE and MSE degrees in the late 1960s and early 1970s when computers were unknown or a novelty in most classrooms and the school didn’t even offer a formal computer major, "... was at the forefront of technology that turned out to be very important to me personally, and I got early exposure to it from a couple of great guys–professors Frank Westervelt and Bernard Galler."
U-M Vice President for Research Geoffrey Norman, writing in 1976, gave special credit to the triumvirate of Michigan computer specialists who contributed greatly to the future of computing at Michigan and in the nation as a whole. "Bartels, Arden, and Westervelt," Norman has said, "were a team that we took great care should not be broken up or induced to leave the University. Westervelt, the hardware expert, Arden, brilliant in software and logic, and Bartels orchestrating their progress – these three put together a superb timesharing computer system. The University and their faculty colleagues owe them much."
Frank Westervelt served Wayne State University from 1971 to 1982 as Director of the Computing Service Center, and from 1982 to 2000 as professor in the Department of Electrical and Computer Engineering, where he was Associate Chair and Undergraduate Officer from 1990 to 1994 and Chair from 1995 to 2000. He started interactive distance learning within ECE, organizing, designing, and developing electronic classrooms and writing software to ease the development of electronic presentations. He obtained the contract to develop and deliver the first ECE course (ECE 562) to ECCE Master's Program students at Ford Motor Company by distance learning methods. In honor of his services Ford Motor Company presented him with the 1993 Customer Driven Quality Award as a member of the Ford/Wayne State University Interactive Distance Education Program Team, the only award given by Ford to a university faculty member in 1993.
Frank died on at his home in Ann Arbor, Michigan.
References
1930 births
2015 deaths
American computer scientists
American engineers
University of Michigan faculty
Wayne State University faculty
People from Ann Arbor, Michigan
People from Benton Harbor, Michigan
University of Michigan College of Engineering alumni |
979292 | https://en.wikipedia.org/wiki/Polymorphic%20engine | Polymorphic engine | A polymorphic engine (sometimes called mutation engine or mutating engine) is a software component that uses polymorphic code to alter the payload while preserving the same functionality.
Polymorphic engines are used almost exclusively in malware, with the purpose of being harder for antivirus software to detect. They do so either by encrypting or obfuscating the malware payload.
One common deployment is a file binder that weaves malware into normal files, such as office documents. Since this type of malware is usually polymorphic, it is also known as a polymorphic packer.
Notable examples of polymorphic engines include MtE (short for Mutation Engine), created in 1992 by a hacker named Dark Avenger, and the engine of the Virut botnet. These engines are usually written in assembly language.
References
Types of malware |
7192877 | https://en.wikipedia.org/wiki/C%C3%BAram%20Software | Cúram Software | Cúram Software was an Irish software company headquartered in Dublin, Ireland with offices in Australia, Germany, India, the United Kingdom and the United States. The company produces Social Enterprise Management (SEM) software and offers consulting services, certification, and training. Their name is an Irish word for "Care and Protection". The company was founded in 1990.
John Hearne, co-founded Cúram, which was initially called IT Design, in 1990 with Ronan Rooney, whom he worked with at Apple Computer. It was renamed to Cúram Software in 2003.
Their main software product was the Cúram Business Application Suite, an administrative software package that allowed welfare and unemployment agencies to manage the programs they administer. The company's products found government buyers in a number of U.S. states, Canadian provinces, Australia, New Zealand, Germany, Guernsey, the UK and elsewhere. According to Federal Computer Week, purchasers of the software, both in the US and internationally, found that by integrating information technology systems their government would save time and money; reduce waste, fraud and abuse; improve data accuracy; and provide better assistance to individuals and families in a holistic manner.
It was purchased by IBM in December 2011. The purchase of Cúram was part of the investments IBM was making to support the convergence of health and human services issues, and gave healthcare organizations the ability to focus on the "whole individual" when addressing the causes of health and disease. The commitment was described as "unprecedented" in that it allowed IBM customers to implement strategies that improve the health of the population as well as the patient experience while also managing the growth of costs. IBM's acquisition of Cúram brought enterprise capabilities into IBM's broader Smarter Care strategy. At the time of acquisition, IBM's software division capabilities expanded significantly when the almost 300 local Cúram staff were added to the 1,000 IBM staff already working on software as part of the Smarter Care offerings. These capabilities resulted in Cúram being chosen as a key component of New York State's Medicaid redesign program.
John Hearne and Ronan Rooney remained with IBM until early 2016, when IBM formed the new IBM Watson Health division, and Cúram software became a part of the offerings in government health and human services.
A system deployed by Cúram for Minnesota health insurance exchange MNsure caused widely reported problems in 2013, including duplicate applications, delayed applications and applicants losing employment being left with no coverage at all for a month or more.
In March 2014, MNsure's interim CEO Scott Leitz reported that things were working better due to 100 new staff members at its overloaded call center and fewer website errors. According to Leitz, “It’s stable and it is in much better shape than it was in the fall and it’s the best place for people to go.”
In 2014, an IBM/Cúram system deployed for the Ministry of Community and Social Services (Ontario) welfare programme "Ontario Works" caused thousands of duplicate payments, while not paying many welfare recipients at all. Some were unable to pay for rent or electricity, in some cases leading to their eviction.
On Tuesday, April 21, 2015 - Ontario Public Service Employees Union (OPSEU) made a publication highlighting their serious concerns over Cúram implementation and the negative impact to their front-line workers.
In April, 2016 the Government of Ontario awarded IBM a $32M contract despite 18 months of problems associated with the software responsible for tracking Ontarians on social assistance.
Cúram software is still being used today in projects around the world. In Denmark and Catalonia, Spain, the healthcare sector is using Cúram intended to cut re-admissions and improve care. In Florida, the South Florida Behavioral Network is using the software to improve care coordination, preventing gaps in care for people suffering from mental illnesses.
It is still being used to connect benefits administrators, healthcare agencies, clinicians and case managers across more than 40 healthcare and service providers.
US states and territories using the IBM Cúram product as of 11/1/2018:
1) NCFAST (North Carolina)
2) SCMMRP (South Carolina)
3) DC Health Benefit Exchange (Washington, DC)
4) Arkansas
5) Missouri
6) Nebraska
7) San Diego (California)
8) MNsure (Minnesota)
9) Puerto Rico
10) US Virgin Islands
References
External links
Cúram Software home page
Smarter Cities
Article on use of Cúram products by New Zealand Government from The Dominion Post
Cúram Software Utah state implementation featured in Government Technology magazine annual list of 'Doers, Dreamers and Drivers
Story in Government Technology on use of Cúram products being used to simplify social services in Indiana
HPtoday announced a new global agreement
Ontario government, IBM smacked for bungled software project
Gov. Hutchinson Tells DHS To Pause Over-Budget Medicaid Computer System
Systems failure - Our view: We don't excuse state officials for oversight of the health exchange, but IBM also needs to be held to public account for its faulty software
EXCLUSIVE: IBM wins $32M Ontario government contract despite delivering problem-riddled software
SAMS: More Than A "Glitch"
IBM Cúram Mobile
Software companies of Ireland |
23442342 | https://en.wikipedia.org/wiki/Nano%20City | Nano City | Nano City was a project proposed by the Haryana government and Sabeer Bhatia (co-founder of Hotmail) to build a city similar to Silicon Valley in northern India. The city was intended to cover 11,000 acres of land near Panchkula.
History
The proposal to construct Nano City was formally approved by the state government of Haryana in September 2006, having first been proposed by Sabeer Bhatia. It envisaged a joint venture between the state-owned Haryana State Industrial and Infrastructure Development Corporation and a private venture owned by Bhatia. The city was to be constructed in two phases, the first covering 5,000 acres of land and the second a further 6,000. Real estate firm Parsvnath Developers joined the project in July 2008.
The city was intended to include an airport, a golf course and a rapid transit system. However, by May 2010 no progress had been made. It was reported that Bhatia had failed to submit detailed plans for its construction. In July 2010 the project was cancelled by the HSIIDC.
See also
Information technology in India
References
External links
Buildings and structures in Haryana
Information technology projects
Information technology in India
Urban planning in India
Haryana |
20884016 | https://en.wikipedia.org/wiki/AFL%20%28video%20game%20series%29 | AFL (video game series) | The AFL video game series is a series of Australian rules football video games based on the AFL. Released originally by Beam Software, it has since been developed by several other game developers.
Games in the series
Aussie Rules Footy
Developer: Beam Software
Publisher: Mattel
Released for: NES
Release date: 1991
It was the first AFL video game. The game involves playing a game of Australian rules football from a third-person perspective, with the ability to perform the basic actions of a typical player of the sport. The game can be played by one person, or by two players against each other. There is also a kick to kick mode, and a season mode where one to six players can play multiple games in a season finishing with a grand final. It was developed by Beam Software, and was published by Mattel.
AFL Finals Fever
Developer: Blue Tongue Entertainment
Publisher: Cadability, EA Sports
Released for: Microsoft Windows
Release date: 1996
It was released for Windows PC only on 9 June 1996. Players could choose any of the 16 clubs of the 1996 AFL season. It was also the last video game in the series to feature the Fitzroy Lions and the Brisbane Bears as playable teams before they were merged. The game was also the first to be developed by Blue Tongue Entertainment, and was published by Cadability.
AFL 98
Developer: Creative Assembly
Publisher: EA Sports
Released for: Microsoft Windows
Release date: 1997
It was released in 1997 for Microsoft Windows. It was based on the 1997 season. 16 teams were available in the game and it was the first in the series to feature and . It is also the first game in the series to have commentary, which was provided by Bruce McAvaney. The game was developed by Creative Assembly and published by EA Sports.
AFL 99
Developer: Creative Assembly
Publisher: EA Sports
Released for: PlayStation, Microsoft Windows
Release date: 1998
It was released in 1998 for the PlayStation and Microsoft Windows. It was based on the 1998 season, with all 16 teams playable. Commentary is provided by Bruce McAvaney and Leigh Matthews. The game was developed by Creative Assembly and published by EA Sports. The game's music was composed by Jeff van Dyck.
Kevin Sheedy AFL Coach 2002
Developer: IR Gurus
Publisher: Acclaim Entertainment
Released for: Microsoft Windows
Release date: 2001
It was the first AFL video game to be developed by IR Gurus, and was released for PC only. The player assumes the role of an AFL coach, issuing commands to the players such as the style of play (attacking, defensive or normal) and when to interchange. It sold reasonably well for an IR Gurus game of the time, but did not perform strongly on the wider market.
AFL Live 2003
Developer: IR Gurus
Publisher: Acclaim Entertainment
Released for: Microsoft Windows, PlayStation 2, Xbox
Release date: 2002
It was released for Microsoft Windows, PlayStation 2 and Xbox. The game is based on the 2002 AFL season with team rosters. It was first released on 5 September 2002 in Australia. It was developed by IR Gurus and published by Acclaim Entertainment. It is also the first game in the series to feature a live action intro of AFL games in the 2003 season. The game was only released in Australia.
AFL Live 2004
Developer: IR Gurus
Publisher: Acclaim Entertainment
Released for: Microsoft Windows, PlayStation 2, Xbox
Release date: 2003
It was released for Microsoft Windows, PlayStation 2 and Xbox on 28 August 2003. The game is based on the 2003 AFL season with team rosters from that year. AFL Live 2004 includes all 16 official AFL teams and 8 stadiums: the MCG, Telstra Dome, Optus Oval, Kardinia Park, AAMI Stadium, Subiaco Oval, the Gabba and the SCG. It also includes all 22 home and away matches and the finals series. The game was published by Acclaim, with the song "Lost Control" by Grinspoon as the intro song. It was developed by IR Gurus.
AFL Live: Premiership Edition
Developer: IR Gurus
Publisher: Acclaim Entertainment, THQ
Released for: Microsoft Windows (THQ), PlayStation 2, Xbox (Acclaim)
Release date: 2004
It was released for Microsoft Windows, PlayStation 2 and Xbox on 29 April 2004. The game is based on the 2004 AFL season with team rosters based on that year. It was developed by IR Gurus and was the final AFL game to be published by Acclaim Entertainment, before their bankruptcy on 1 September 2004.
AFL Premiership 2005
Developer: IR Gurus
Publisher: Sony Computer Entertainment, THQ
Released for: Microsoft Windows, PlayStation 2 (Sony Computer Entertainment), Xbox (THQ)
Release date: 2005
It is based on the 2005 AFL season. This is the next edition after AFL Live: Premiership Edition. When Acclaim shut down its operations in Australia, Sony Computer Entertainment acquired the publishing and distribution rights to the game. Because Sony Computer Entertainment had an exclusive period with the title, it was initially released only on the PlayStation 2, with THQ later releasing Microsoft Windows and Xbox versions. It was released on 22 September 2005 and was only available in Australia.
AFL Premiership 2006
Developer: IR Gurus
Publisher: Sony Computer Entertainment
Released for: PlayStation 2
Release date: 2006
AFL Premiership 2006 is the tenth game in the series. A follow-up to AFL Premiership 2005, it is based on the 2006 AFL season and was released only for the PlayStation 2. The revamped kicking system requires players to time their button presses to kick straight, because holding the button down for too long causes the ball to veer to the opposite side. There are several modes: training mode (covering the basics), short match, Wizard Cup, Premiership and Finals. A newly introduced multi-season mode allows the management of certain team aspects, including improving player skills, trading players at the end of the season, and putting an emphasis on the draft.
AFL Premiership 2007
Developer: IR Gurus
Publisher: Sony Computer Entertainment
Released for: PlayStation 2
Release date: 2007
It is a simulation game for the PlayStation 2 based on the AFL. It was the final AFL game developed by Australian games company IR Gurus, the studio's seventh entry in the series, and was published by Sony Computer Entertainment; it was released on 28 June 2007. The game includes all 16 teams, more than 600 AFL players with updated stats, and all of the major stadiums. Game modes in AFL Premiership 2007 are Single Match, Season Mode, Career Mode, Mission Mode and Training Mode. It was a follow-up to AFL Premiership 2006.
AFL Challenge
Developer: Wicked Witch Software
Publisher: Tru Blu Entertainment, Sony Computer Entertainment
Released for: PlayStation Portable
Release date: 2009
It was released for the PlayStation Portable. The game was developed by Wicked Witch Software and co-published by Tru Blu Entertainment and Sony Computer Entertainment. It was released on 10 September 2009. The game is based on the 2009 AFL season and includes all 16 teams and players.
AFL Live
Developer: Big Ant Studios
Publisher: Tru Blu Entertainment
Released for: Microsoft Windows, PlayStation 3, Xbox 360
Release date: 2011, 2012
It was released for Microsoft Windows, PlayStation 3 and Xbox 360 based on the 2011 AFL season. It was developed by Big Ant Studios and released on 21 April 2011. The Game of the Year Edition, an updated version of the game for the 2012 AFL season was released on 6 June 2012.
AFL (2011)
Developer: Wicked Witch Software
Publisher: Tru Blu Entertainment
Released for: Wii
Release date: 19 May 2011.
It was released for Wii the same year as AFL Live, based on the 2011 AFL season. It features more management mechanics than Live, with a ten year campaign, as well as multiplayer of up to 8 players. As with the other systems, a Game of the Year edition with 2012 players and locales was again released in June 2012.
AFL Live 2
Developer: Wicked Witch Software
Publisher: Tru Blu Entertainment
Released for: PlayStation 3, Xbox 360, iOS, Android
Release date: 2013, 2014, 2015
It was released for PlayStation 3 and Xbox 360 on 12 September 2013. The 2014 Season Pack was released on 30 June 2014 for Xbox 360 and PlayStation 3 on 9 July 2014. A mobile port was released on iOS on 28 May 2015 and Android on 26 September 2015.
AFL Evolution
Developer: Wicked Witch Software
Publisher: Tru Blu Entertainment
Released for: Microsoft Windows, PlayStation 4, Xbox One
Release date: 2017, 2018
It was released for Microsoft Windows, PlayStation 4 and Xbox One. It was developed by Wicked Witch Software and was released on May 5, 2017 for PlayStation 4 and Xbox One, with the Microsoft Windows version released on July 21, 2017 via Steam. The 2018 Season Pack was later released on May 3, 2018.
AFL Evolution 2
Developer: Wicked Witch Software
Publisher: Tru Blu Entertainment
Released for: Microsoft Windows, PlayStation 4, Xbox One, Nintendo Switch
Release date: April 16, 2020 (PlayStation 4 and Xbox One). May 14, 2020 (Nintendo Switch). September 11, 2020 (Microsoft Windows).
It was released for Microsoft Windows, PlayStation 4, Xbox One and Nintendo Switch.
Other titles
AFL Mascot Manor
It was released for the Nintendo DS on 2 July 2009. Focused more on the league's mascots than on the sport itself, the central component of the game is the adventure the player's mascot experiences in the themed worlds.
AFL: Gold Edition
AFL: Gold Edition is an iOS AFL simulation video game based on the 2011 AFL season, released on 14 December 2011. The 2012 AFL season update was released on 4 June 2012. It was developed by Wicked Witch Software and had similar gameplay to AFL on the Wii.
References
1996 video games
Australia-exclusive video games
Australian rules football video games
Video games developed in Australia
Video games scored by Jeff van Dyck
Video games set in Australia
Articles which contain graphical timelines
Windows games
Windows-only games
PlayStation 2 games
Xbox games
PlayStation (console) games |
1120214 | https://en.wikipedia.org/wiki/Cloop | Cloop | The compressed loop device (cloop) is a module for the Linux kernel. It adds support for transparently decompressed, read-only block devices. It is not a compressed file system: cloop is mostly used as a convenient way to compress conventional file systems onto Live CDs.
Cloop was originally written for the Levanta Bootable Business Card by Rusty Russell, but is now maintained by Klaus Knopper, the author of Knoppix.
A compression ratio of about 2.5:1 is common for software. The Knoppix cloop image, for example, is 700 MB compressed and around 1.8 GB uncompressed.
Design
cloop images contain:
A shell script (with mount commands for the image)
A header with the number of blocks and the uncompressed block size
A seek index with compressed and uncompressed block sizes in pairs
zlib-compressed data blocks, packed end-to-end
The data blocks are compressed separately; this makes it possible to seek to individual blocks without having to decompress the entire image from the start, but at the cost of slightly reducing the compression ratio. Live CD images typically use a block size of 256 KB as a compromise between decompression speed and space-efficiency.
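The effect of the seek index can be illustrated with a short sketch. This is not the exact on-disk layout of cloop (which also stores the shell script header and block counts); it only demonstrates the principle that an index of per-block offsets lets a single block be located and decompressed on its own.

```python
import zlib

BLOCK_SIZE = 256 * 1024   # uncompressed block size, as used by typical Live CD images

def read_block(image, index, block_number):
    """Decompress a single block of a cloop-style image.

    `index` is a list of (offset, compressed_size) pairs, one entry per block,
    as would be reconstructed from the image's seek index.
    """
    offset, csize = index[block_number]
    image.seek(offset)
    return zlib.decompress(image.read(csize))

def read_bytes(image, index, pos, length):
    """Random access: map a byte range onto only the blocks that contain it."""
    out = b""
    while length > 0:
        block, inner = divmod(pos, BLOCK_SIZE)
        data = read_block(image, index, block)[inner:inner + length]
        if not data:          # past the end of the image
            break
        out += data
        pos += len(data)
        length -= len(data)
    return out
```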
Apple uses a similar file format in the compressed variant of its DMG disk images.
Limitations
The design of the cloop driver requires that compressed blocks be read whole from disk. This makes cloop access inherently slower when there are many scattered reads, which can happen if the system is low on memory or when a large program with many shared libraries is starting. A big issue is the seek time for CD-ROM drives (~80 ms), which exceeds that of hard disks (~10 ms) by a large factor. On the other hand, because files are packed together, reading a compressed block may thus bring in more than one file into the cache. The effects of tail packing are known to improve seek times (cf. reiserfs, btrfs), especially for small files. Some performance tests related to cloop have been conducted.
See also
loop device
Cramfs
SquashFS
e2compr
References
External links
cloop sources against the mainline Linux kernels and a patch to support any known cloop format. Note: versions 0.xx are for kernel 2.2; 1.xx are for kernel 2.4; 2.xx are for kernel 2.4 and 2.6.
cloop at Knoppix Linux Wiki (installation instructions are here)
Slides from a LinuxTag presentation by Klaus Knopper on the implementation of cloop (in German).
A fuse driver for cloop with a patch (description) to support any known cloop format and the binary.
Knoppix
Compression file systems
Third-party Linux kernel modules |
423331 | https://en.wikipedia.org/wiki/IDEF | IDEF | IDEF, initially an abbreviation of ICAM Definition and renamed in 1999 as Integration Definition, is a family of modeling languages in the field of systems and software engineering. They cover a wide range of uses from functional modeling to data, simulation, object-oriented analysis and design, and knowledge acquisition. These definition languages were developed under funding from U.S. Air Force and, although still most commonly used by them and other military and United States Department of Defense (DoD) agencies, are in the public domain.
The most-widely recognized and used components of the IDEF family are IDEF0, a functional modeling language building on SADT, and IDEF1X, which addresses information models and database design issues.
Overview of IDEF methods
IDEF refers to a family of modeling languages covering a wide range of uses, from functional modeling to data, simulation, object-oriented analysis/design and knowledge acquisition. The IDEF methods were eventually defined up to IDEF14:
IDEF0 : Function modeling
IDEF1 : Information modeling
IDEF1X : Data modeling
IDEF2 : Simulation model design
IDEF3 : Process description capture
IDEF4 : Object-oriented design
IDEF5 : Ontology description capture
IDEF6 : Design rationale capture
IDEF7 : Information system auditing
IDEF8 : User interface modeling
IDEF9 : Business constraint discovery
IDEF10 : Implementation architecture modeling
IDEF11 : Information artifact modeling
IDEF12 : Organization modeling
IDEF13 : Three schema mapping design
IDEF14 : Network design
In 1995 only the IDEF0, IDEF1X, IDEF2, IDEF3 and IDEF4 had been developed in full. Some of the other IDEF concepts had some preliminary design. Some of the last efforts were new IDEF developments in 1995 toward establishing reliable methods for business constraint discovery IDEF9, design rationale capture IDEF6, human system, interaction design IDEF8, and network design IDEF14.
The methods IDEF7, IDEF10, IDEF11, IDEF 12 and IDEF13 haven't been developed any further than their initial definition.
History
IDEF originally stood for ICAM Definition. The effort was initiated in the 1970s at the US Air Force Materials Laboratory, Wright-Patterson Air Force Base in Ohio, by Dennis E. Wisnosky, Dan L. Shunk, and others, and completed in the 1980s. IDEF was a product of the ICAM initiative of the United States Air Force. The IEEE later recast the IDEF abbreviation as Integration Definition.
The specific projects that produced IDEF were ICAM project priorities 111 and 112 (later renumbered 1102). The subsequent Integrated Information Support System (IISS) project priorities 6201, 6202, and 6203 attempted to create an information processing environment that could be run in heterogeneous physical computing environments. Further development of IDEF occurred under those projects as a result of the experience gained from applications of the new modeling techniques. The intent of the IISS efforts was to create 'generic subsystems' that could be used by a large number of collaborating enterprises, such as U.S. defense contractors and the armed forces of friendly nations.
At the time of the ICAM 1102 effort there were numerous, mostly incompatible, data model methods for storing computer data — sequential (VSAM), hierarchical (IMS), network (Cincom's TOTAL and CODASYL, and Cullinet's IDMS). The relational data model was just emerging as a promising way of thinking about structuring data for easy, efficient, and accurate access. Relational database management systems had not yet emerged as a general standard for data management.
The ICAM program office deemed it valuable to create a "neutral" way of describing the data content of large-scale systems. The emerging academic literature suggested that methods were needed to process data independently of the way it was physically stored. Thus the IDEF1 language was created to allow a neutral description of data structures that could be applied regardless of the storage method or file access method.
IDEF1 was developed under ICAM program priority 1102 by Robert R. Brown of the Hughes Aircraft Company, under contract to SofTech, Inc. Brown had previously been responsible for the development of IMS while working at Rockwell International. Rockwell chose not to pursue IMS as a marketable product but IBM, which had served as a support contractor during development, subsequently took over the product and was successful in further developing it for market. Brown credits his Hughes colleague Timothy Ramey as the inventor of IDEF1 as a viable formalism for modeling information structures. The two Hughes researchers built on ideas from and interactions with many luminaries in the field at the time. In particular, IDEF1 draws on the following techniques:
the evolving natural language information model (ENALIM) technique of G. M. Nijssen (Control Data Corporation) — this technique is now more widely known as NIAM or the object-role model ORM;
the network data structures technique, popularly called the CODASYL approach, of Charles Bachman (Honeywell Information Systems);
the hierarchical data management technique, implemented in IBM's IMS data management system, developed by R. R. Brown (Rockwell International);
the relational approach to data of E. F. Codd (IBM);
The entity-relationship approach (E-R) of Peter Chen (UCLA).
The effort to develop IDEF1 resulted in both a new method for information modeling and an example of its use in the form of a "reference information model of manufacturing." This latter artifact was developed by D. S. Coleman of the D. Appleton Company (DACOM) acting as a sub-contractor to Hughes and under the direction of Ramey. Personnel at DACOM became expert at IDEF1 modeling and subsequently produced a training course and accompanying materials for the IDEF1 modeling technique.
Experience with IDEF1 revealed that the translation of information requirements into database designs was more difficult than had originally been anticipated. The most beneficial value of the IDEF1 information modeling technique was its ability to represent data independent of how those data were to be stored and used. It provided data modelers and data analysts with a way to represent data requirements during the requirements-gathering process. This allowed designers to decide which DBMS to use after the nature of the data requirements was understood and thus reduced the "misfit" between data requirements and the capabilities and limitations of the DBMS. The translation of IDEF1 models to database designs, however, proved to be difficult.
The IDEF modeling languages
IDEF0
The IDEF0 functional modeling method is designed to model the decisions, actions, and activities of an organization or system. It was derived from the established graphic modeling language structured analysis and design technique (SADT) developed by Douglas T. Ross and SofTech, Inc. In its original form, IDEF0 includes both a definition of a graphical modeling language (syntax and semantics) and a description of a comprehensive methodology for developing models. The US Air Force commissioned the SADT developers to develop a function model method for analyzing and communicating the functional perspective of a system. IDEF0 should assist in organizing system analysis and promote effective communication between the analyst and the customer through simplified graphical devices.
IDEF1X
To satisfy the data modeling enhancement requirements that were identified in the IISS-6202 project, a sub-contractor, DACOM, obtained a license to the logical database design technique (LDDT) and its supporting software (ADAM). LDDT had been developed in 1982 by Robert G. Brown of The Database Design Group entirely outside the IDEF program and with no knowledge of IDEF1. LDDT combined elements of the relational data model, the E-R model, and generalization in a way specifically intended to support data modeling and the transformation of the data models into database designs. The graphic syntax of LDDT differed from that of IDEF1 and, more importantly, LDDT contained interrelated modeling concepts not present in IDEF1. Mary E. Loomis wrote a concise summary of the syntax and semantics of a substantial subset of LDDT, using terminology compatible with IDEF1 wherever possible. DACOM labeled the result IDEF1X and supplied it to the ICAM program.
Because the IDEF program was funded by the government, the techniques are in the public domain. In addition to the ADAM software, sold by DACOM under the name Leverage, a number of CASE tools use IDEF1X as their representation technique for data modeling.
The IISS projects actually produced working prototypes of an information processing environment that would run in heterogeneous computing environments. Current advancements in such techniques as Java and JDBC are now achieving the goals of ubiquity and versatility across computing environments which was first demonstrated by IISS.
IDEF2 and IDEF3
The third IDEF (IDEF2) was originally intended as a user interface modeling method. However, since the Integrated Computer-Aided Manufacturing (ICAM) program needed a simulation modeling tool, the resulting IDEF2 was a method for representing the time-varying behavior of resources in a manufacturing system, providing a framework for the specification of math-model-based simulations. It was the intent of the methodology program within ICAM to rectify this situation, but limitations in funding did not allow this to happen. As a result, the lack of a method to support the structuring of descriptions of the user view of a system has been a major shortcoming of the IDEF system. The basic problem from a methodology point of view is the need to distinguish between a description of what a system (existing or proposed) is supposed to do and a representative simulation model that predicts what a system will do. The latter was the focus of IDEF2; the former is the focus of IDEF3.
IDEF4
The development of IDEF4 came from the recognition that the modularity, maintainability and code reusability that results from the object-oriented programming paradigm can be realized in traditional data processing applications. The proven ability of the object-oriented programming paradigm to support data level integration in large complex distributed systems is also a major factor in the widespread interest in this technology from the traditional data processing community.
IDEF4 was developed as a design tool for software designers who use object-oriented languages such as the Common Lisp Object System, Flavors, Smalltalk, Objective-C, C++, and others. Since effective usage of the object-oriented paradigm requires a different thought process than used with conventional procedural or database languages, standard methodologies such as structure charts, data flow diagrams, and traditional data design models (hierarchical, relational, and network) are not sufficient. IDEF4 seeks to provide the necessary facilities to support the object-oriented design decision making process.
IDEF5
IDEF5, or integrated definition for ontology description capture method, is a software engineering method to develop and maintain usable, accurate, domain ontologies. In the field of computer science ontologies are used to capture the concept and objects in a specific domain, along with associated relationships and meanings. In addition, ontology capture helps coordinate projects by standardizing terminology and creates opportunities for information reuse. The IDEF5 Ontology Capture Method has been developed to reliably construct ontologies in a way that closely reflects human understanding of the specific domain.
In the IDEF5 method, an ontology is constructed by capturing the content of certain assertions about real-world objects, their properties and their interrelationships, and representing that content in an intuitive and natural form. The IDEF5 method has three main components: A graphical language to support conceptual ontology analysis, a structured text language for detailed ontology characterization, and a systematic procedure that provides guidelines for effective ontology capture.
IDEF6
IDEF6, or integrated definition for design rationale capture, is a method to facilitate the acquisition, representation, and manipulation of the design rationale used in the development of enterprise systems. Rationale is the reason, justification, underlying motivation, or excuse that moved the designer to select a particular strategy or design feature. More simply, rationale is interpreted as the answer to the question, “Why is this design being done in this manner?” Most design methods focus on what the design is (i.e. on the final product, rather than why the design is the way it is).
IDEF6 is a method that possesses the conceptual resources and linguistic capabilities needed
to represent the nature and structure of the information that constitutes design rationale within a given system, and
to associate that rationale with design specifications, models, and documentation for the system.
IDEF6 is applicable to all phases of the information system development process, from initial conceptualization through both preliminary and detailed design activities. To the extent that detailed design decisions for software systems are relegated to the coding phase, the IDEF6 technique should be usable during the software construction process as well.
IDEF8
IDEF8, or integrated definition for human-system interaction design, is a method for producing high-quality designs of interactions between users and the systems they operate. Systems are characterized as a collection of objects that perform functions to accomplish a particular goal. The system with which the user interacts can be any system, not necessarily a computer program. Human-system interactions are designed at three levels of specification within the IDEF8 method. The first level defines the philosophy of system operation and produces a set of models and textual descriptions of overall system processes. The second level of design specifies role-centered scenarios of system use. The third level of IDEF8 design is for human-system design detailing. At this level of design, IDEF8 provides a library of metaphors to help users and designers specify the desired behavior in terms of other objects whose behavior is more familiar. Metaphors provide a model of abstract concepts in terms of familiar, concrete objects and experiences.
IDEF9
IDEF9, or integrated definition for business constraint discovery, is designed to assist in the discovery and analysis of constraints in a business system. A primary motivation driving the development of IDEF9 was an acknowledgment that the collection of constraints that forge an enterprise system is generally poorly defined. The knowledge of what constraints exist and how those constraints interact is incomplete, disjoint, distributed, and often completely unknown. Just as living organisms do not need to be aware of the genetic or autonomous constraints that govern certain behaviors, organizations can (and most do) perform well without explicit knowledge of the glue that structures the system. In order to modify business in a predictable manner, however, the knowledge of these constraints is as critical as knowledge of genetics is to the genetic engineer.
IDEF14
IDEF14, or integrated definition for network design method, is a method that targets the modeling and design of computer and communication networks. It can be used to model existing ("as is") or envisioned ("to be") networks. It helps the network designer to investigate potential network designs and to document design rationale. The fundamental goals of the IDEF14 research project developed from a perceived need for good network designs that can be implemented quickly and accurately.
References
Further reading
Ovidiu S. Noran (2000). Business Modelling: UML vs. IDEF Paper Griffith University
External links
Integrated DEFinition Methods
Data Modeling
The IDEF Process Modeling Methodology by Robert P. Hanrahan 1995
Data modeling
Standards
Systems engineering
Modeling languages |
16864378 | https://en.wikipedia.org/wiki/WarpOS | WarpOS | WarpOS is a multitasking kernel for the PowerPC (PPC) architecture central processing unit (CPU) developed by Haage & Partner for the Amiga computer platform in the late 1990s and early 2000s. It runs on PowerUP accelerator boards developed by phase5 which contains both a Motorola 68000 series CPU and a PowerPC CPU with shared address space. WarpOS runs alongside the 68k-based AmigaOS, which can use the PowerPC as a coprocessor. Despite its name, it is not an operating system (OS), but a kernel; it supplies a limited set of functions similar to those in AmigaOS for using the PowerPC. When released, its original name was WarpUP, but was changed to reflect its greater feature set, and possibly to avoid comparison with its competitor, PowerUP.
It was developed by Sam Jordan using 680x0 and PowerPC assembly language. It was distributed free of charge.
History
In 1997, Phase5, an Amiga hardware manufacturer, launched their range of PowerPC (PPC) accelerators for the Amiga. Because AmigaOS was not yet PowerPC native, as a stopgap measure the PowerUP boards were dual-processor boards, incorporating the PPC and a 68K processor (68LC040, 68040 at 25 MHz or 68060 at 50 MHz). They carried the PowerUP kernel on board in an erasable programmable read-only memory (EPROM), a similar kernel designed to allow AmigaOS application software to use both PPC and 68k applications through an application programming interface (API) library named . AmigaOS still required a 68K processor, while the PPC was in effect used as an extremely fast coprocessor that carried out specific instructions.
This caused a significant slowdown whenever the OS switched a task between the 68K and the PPC (a context switch), because the CPU caches had to be flushed to maintain memory integrity. The more CPU switches occurred in an application, the greater the slowdown, often so much that using the PPC processor was pointless, being slower than the native 68k binary. The main workaround was simply to avoid as many 68k OS calls as possible, or to group them together, but this was difficult and time-consuming for developers.
WarpOS was launched as a controversial alternative to Phase5's PowerUP kernel, but eventually became the most used and nominally the standard PPC kernel on AmigaOS.
WarpUP
WarpUP is a high-speed kernel for PowerPC versions of Amiga.
WarpUP forms a hardware abstraction layer between the hardware and software, and ensures that the applications function correctly on PowerPC architecture. It also forms an interface between PowerPC driven hardware, and 68k compliant software, which allows the optimal exploitation of the speed of the PowerPC CPU, while making the porting of 68k applications as easy as possible.
Several advantages that WarpUP claims to offer are:
High speed communication between 68k programs and PowerPC CPUs
Native multi-tasking, memory management, semaphores, list and tag management, signalling and message handling
Memory protection (tasks are allowed to allocate to protected memory areas if need be)
Virtual Signals (signals are shared between CPUs and will always be redirected to the correct CPU when needed)
Inter-CPU messaging system (messages are passed between the CPUs when needed)
Optimal use of the PowerPC memory management unit and the PowerPC CPU cache
Memory management unit and exception-handling support for applications
PowerSave function that turns the PowerPC off if no applications are using it
PowerPC Enforcer, protects the first page of memory
A detailed crash requester that provides detailed information to help developers locate errors
Integrated debugging system that makes bug tracking easier
Specific support for highly optimized software such as games and demos
Support for Amiga-compliant applications
Libraries for PowerPC native, mixed, and fat binary applications
WarpUP is also usable with alternative developer systems such as Modula- or E-compilers with PowerPC support. This is because object files are not required to be in Executable and Linkable Format (ELF); the Amiga-compliant hunk format can also be used.
Easy to install
Hardware independent
Features
WarpOS had similar features to PowerUP, but with some major differences. Most pertinently, it used the PowerOpen application binary interface (ABI), in contrast to PowerUP which used the newer and better supported UNIX System V (SysV), which ensured both kernels could not be directly compatible.
From version 14, the WarpOS kernel used a slightly different multitasking scheduler than AmigaOS (or PowerUP), based on that of Unix systems, with "nice" values and priorities for its own tasks and processes. This was meant to ensure that all tasks got CPU time and were not starved of it by compute-intensive tasks (as was the case with the original AmigaOS scheduler). However, this was of limited effect, as it was still constrained by the native AmigaOS scheduler, and it created extra difficulties in synchronising with the 68k side (particularly for sound). In version 15 WarpOS introduced a concept called atomic tasks. Such tasks are non-interruptible, and scheduling does not take place unless the task explicitly allows it.
WarpOS also had a built-in debugger which could dump information on any crashed task either to a console window on screen or to the serial port, depending on environment variables.
One of the most lauded features of WarpOS was that it continued the Amiga Hunk format of original Amiga executables in a variant format named Extended Hunk format (EHF), and implemented the hunk type named HUNK_PPC_CODE. This allowed AmigaOS to transparently handle WarpOS executables without needing to patch the OS to recognise them, which PowerUP did need to do to run its ELF file format. While elegant in theory, the EHF format's downfall was its lack of widespread compiler support (especially GNU Compiler Collection (GCC)), and the ELF file format was adopted by AmigaOS 4 and MorphOS.
Unlike PowerUP, WarpOS could also produce mixed (fat) binaries with both 68k and PPC code, which could run on both Amiga PPC boards and ordinary Amiga systems. This practice was very rare due to the programming complexity of doing so, but the picture datatype in AmigaOS 3.9 (a shared library that loaded, processed and dithered pictures through the AmigaOS datatypes system) was a notable example of its use. PPC equipped systems would notice an immediate large speed-up, while 68k systems and emulators would still be compatible without having crashing or installing another binary.
WarpOS had two housekeeping tasks named Defiant and Babylon5, thought to be named after the USS Defiant from Star Trek: Deep Space Nine and Babylon 5, its developers being science fiction fans. These tasks were often reported by new users who did not know what they were when they appeared in task lists.
Controversy
Haage & Partner, an Amiga software and hardware manufacturer (which also created AmigaOS 3.9), developed a competing kernel to PowerUP named WarpUP, which they claimed would work around the context switching problem, a claim which would be bitterly challenged by Phase5. Phase5 claimed correctly that this hardware problem could not be circumvented by simply optimising the kernel and was a limitation inherent to the almost unique board design, which shared the memory bus between two CPUs of radically different families. WarpOS versions up to V7 were wrappers added around Phase5's PowerUP kernel but starting from version 8 it was its own PPC kernel running alongside AmigaOS and was renamed WarpOS.
As PowerUP was stored in the EPROM of the boards and could not run at the same time as WarpOS, it had to be deactivated by a small software tool. As H&P did not have access to the EPROM, the tool had to make assumptions about the PowerUP kernel, and these naturally broke with updated versions. This led to open accusations by WarpOS advocates and by its author, Sam Jordan, that Phase5 was intentionally trying to prevent WarpOS from running on its boards. Phase5 in turn claimed that Haage & Partner had abused a free developer board gifted to them to launch this competing kernel (although free, WarpOS was supported almost exclusively by H&P's commercial StormC++ compiler), and that they had reverse-engineered PowerUP to do so. H&P pointed out that this was unavoidable as long as Phase5 refused to allow users to choose which kernel to put in the board EPROM; Phase5 claimed that the PowerUP kernel was essential for initialising the boards on boot and that erasing it would simply render the boards useless.
Worse still, users were originally only able to run one of these kernels, resulting in much duplication of effort between competing developers determined to use one or the other, often with two versions of software being developed independently. Despite there being little or no real difference in performance, debugging capability, usability or stability in either system, and it had become patently clear that neither could hope to work around the hardware context switch issue, a series of claims were made on each side and much fighting in Usenet followed.
This resulted in a great number of hurried, often semi-functional ports of open source software from Windows, often made just to "one up" the other side. Steffen Haeuser (who had gained notoriety by declaring, "ELF is a monster !!!", referring to the ELF file format) of Hyperion Entertainment CVBA was particularly infamous for his "political" ports being so rushed that they lacked sound or were very unstable, released just to make up the numbers and produce a list of software greater than that of PowerUP.
The impasse between the competing systems was eventually ended by a PowerUP wrapper for WarpOS written by Frank Wille, which allowed PowerUP software to run on WarpOS systems.
The bitter infighting in the Amiga community over the two kernels, while brief, produced a rift that eventually culminated in a split between AmigaOS and MorphOS, with most WarpOS and PowerUP developers switching to AmigaOS 4 and MorphOS respectively.
WarpOS was intended to be used as a basis for AmigaOS 4, but Haage & Partner dropped the project when their AmigaOS 4 PPC contract was cancelled by Amiga, Inc. in 2000. When Hyperion Entertainment took over the project, they originally had the same idea, but their developers later admitted that it was of very little use in modernising the OS, being written wholly in unannotated assembly language.
The choice of WarpOS over its rival proved to be a Pyrrhic victory, as the standards it had developed around, namely EHF and PowerOpen, were to be wholly abandoned in later development of AmigaOS and its clones. The dual CPU model did not recur.
Legacy support in other operating systems
AmigaOS 4
A wrapper was made for AmigaOS 4.0 and 4.1; at first it was included with the OS, and later it was distributed by the GuruMeditation team (not to be confused with the Amiga "Guru Meditation" error, the platform's equivalent of a blue screen of death, which shares the name). This wrapper supported the PowerPC 603e, 604e, AMCC 440EP, G3 and G4 CPUs, but failed to work on the AMCC 460 and P.A. Semi PA6T.
Work is under way on a new wrapper named ReWarp, developed by a group named Sakura.
MorphOS
MorphOS also uses a wrapper to run WarpUP programs; it likewise provides a wrapper for PowerUP, a WarpOS competitor.
Games for WarpOS
CrossFire II
Descent: FreeSpace – The Great War
The game was first released for WarpOS, then ported to AmigaOS 4.0
ADoomPPC
Original title: DOOM
Earth 2140
AmiHeretic
Heretic II
(Only for WarpOS, not for AmigaOS)
WarpHexen
Original title: Hexen: Beyond Heretic
(Same game named UHexen for AmigaOS4)
Nightlong: Union City Conspiracy
Payback
AmiQuake
The Feeble Files
Quake II
The game was first released for WarpOS, then ported to AmigaOS 4.0
Shogo: Mobile Armor Division
(Only for WarpOS, not for AmigaOS)
Wipeout 2097
(Only for WarpOS, not for AmigaOS)
Demos for WarpOS
PPC/Warp3D – demo by CdBS Software; 2nd at UkonxParty 2000
V1.0 Demo – PPC/Warp3D by CdBS Software
DeathTrial – FixPatch 0.1
MusicDisk – Earth-Tribe-Media
One Day Miracle – by Fit ASM'02 64k intro
Booring Trip PPC – for UkonxParty 4 in France
Greuh!Zillement Beta – 2nd @ LTP4
Salvation – PPC dentro, by Horizontal Lamerz
Flow – Winner 64kb at FuckYanica One
Megademo IV – Quick PPC port
DeathTrial – by Mkd:AGA/CGXwarposPPCAhi+dbplayer
Equinoxe demoparty invitation
PRO_GEAR_SPEC – WarpOS PPC demo by mankind
Mankind MesaGLUT – wos+ahi surreal demo
212 – by Madwizards; 1st at Delirium 2001
Amsterdam Blessings – by Madwizards; 3rd at M/S 2001
Cull Bazaar – by Madwizards; 11th at Assembly 2001
Nuance "Subtle Shades 2" – 5th place at MS2K+1
4th place at MS99 by NUANCE
NoSyncIzBack! – WOS demo 3rd at IGDRP 2.
Planet Potion – A 64 kB Intro by Potion
Suicidal – A 64 kB Intro by Potion
Sayontsheck – PPC AGA Demo by Lamers
Luminance – PPC WOS v1.1 – UKONX; 1st at Slach 2 1999
NoSync – by Universe – WOS demo 3rd at Equinoxe 2003
PowerUp – by Universe: Winner WOS demo at Slash 2001
Everything Dies – by Venus Art, PPC WarpUP version
Ghost – by Venus Art, PPC WarpUP version
Emulators for WarpOS
IFusion, FusionPPC – MacOS 8/9 emulator
WarpSNES
Programs for WarpOS
Frogger – video player
fxPaint
PerfectPaint
wosdb – simple debugger
See also
Amiga exec
References
Notes
Warpsness problems :( Steffen Haeuser explains WarpUp kernel at comp.sys.amiga.games
See also pages regarding history of the PPC processor on Amiga at Amiga.History site.
EHF specifications on Haage&Partners site.
BlizzardPPC Flash Why WarpOS and Warp3D have problems with Blizzard PPC
Dietrich, Wolf; Amiga Report Magazine Haage and Partner Announce WarpUP, Phase5 Blasts H&P
comp.sys.amiga.games Steffen Haeuser comments ELF
ppclibemu ppc.library emulation under WarpOS
List of software projects of Sam Jordan
Interview with Ben Hermans from Hyperion Benjamin Hermans comments WarpOS
Jordan, S: powerpc.library/WarpOS history. 2001
Further reading
Amiga
AmigaOS
MorphOS
Operating system kernels
Microkernels |
316497 | https://en.wikipedia.org/wiki/Virtual%20Storage%20Access%20Method | Virtual Storage Access Method | Virtual Storage Access Method (VSAM) is an IBM DASD file storage access method, first used in the OS/VS1, OS/VS2 Release 1 (SVS) and Release 2 (MVS) operating systems, later used throughout the Multiple Virtual Storage (MVS) architecture and now in z/OS. Originally a record-oriented filesystem, VSAM comprises four data set organizations: key-sequenced (KSDS), relative record (RRDS), entry-sequenced (ESDS) and linear (LDS). The KSDS, RRDS and ESDS organizations contain records, while the LDS organization (added later to VSAM) simply contains a sequence of pages with no intrinsic record structure, for use as a memory-mapped file.
Overview
An IBM Redbook named "VSAM PRIMER" (especially when used with the "Virtual Storage Access Method (VSAM) Options for Advanced Applications" manual) explains the concepts needed to make use of VSAM. IBM uses the term data set in official documentation as a synonym of file, and the term direct access storage device (DASD) because other devices similar to disk drives were also supported.
VSAM records can be of fixed or variable length. They are organised in fixed-size blocks called Control Intervals (CIs), which are grouped into larger divisions called Control Areas (CAs). Control Interval sizes are measured in bytes (for example, 4 kilobytes), while Control Area sizes are measured in disk tracks or cylinders. Control Intervals are the units of transfer between disk and computer, so a read request will read one complete Control Interval. Control Areas are the units of allocation, so when a VSAM data set is defined, an integral number of Control Areas will be allocated.
The Access Method Services utility program IDCAMS is commonly used to manipulate ("delete and define") VSAM data sets. Custom programs can access VSAM datasets through Data Definition (DD) statements in Job Control Language (JCL), via dynamic allocation or in online regions such as in Customer Information Control System (CICS).
Both IMS/DB and DB2 are implemented on top of VSAM and use its underlying data structures.
VSAM files
The physical organization of VSAM data sets differs considerably from the organizations used by other access methods, as follows.
A VSAM file is defined as a cluster of VSAM components, e.g., for KSDS a DATA component and an INDEX component.
Control Intervals and Control Areas
VSAM components consist of fixed length physical blocks grouped into fixed length control intervals (CI) and control areas (CA). The size of the CI and CA is determined by the Access Method Services (AMS), and the way in which they are used is normally not visible to the user. There will be a fixed number of control intervals in each control area.
A control interval normally contains multiple records. The records are stored within the control interval starting from the low address upwards. Control information is stored at the other end of the control interval, starting from the high address and moving downwards. The space between the records and the control information is free space. The control information comprises two types of entry: a control interval descriptor field (CIDF) which is always present, and record descriptor fields (RDF) which are present when there are records within the control interval and describe the length of the associated record. Free space within a CI is always contiguous.
When records are inserted into a control interval, they are placed in the correct order relative to other records. This may require records to be moved out of the way inside the control interval. Conversely, when a record is deleted, later records are moved down so that the free space remains contiguous. If there is not enough free space in a control interval for a record to be inserted, the control interval is split. Roughly half the records are stored in the original control interval while the remaining records are moved into a new control interval. The new control interval is taken from a pool of free control intervals within the same control area as the original control interval. If there is no remaining free control interval within that control area, the control area itself is split and the control intervals are distributed equally between the old and the new control areas.
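The Python sketch below models this behaviour in a deliberately simplified way; the byte budgets, the string records and the split policy are invented for illustration and do not reflect the actual on-disk layout. Records plus their record descriptor fields consume space inside a fixed-size control interval, and an insert that no longer fits triggers a split into a free control interval.

CI_SIZE, CIDF_LEN, RDF_LEN = 4096, 4, 3   # illustrative byte budgets

class ControlInterval:
    def __init__(self, records=None):
        self.records = sorted(records or [])     # records kept in key order

    def bytes_used(self):
        return CIDF_LEN + sum(len(r) + RDF_LEN for r in self.records)

    def insert(self, record):
        """Insert a record; return a new CI if this one had to split."""
        if self.bytes_used() + len(record) + RDF_LEN <= CI_SIZE:
            self.records.append(record)
            self.records.sort()                  # later records move to keep order
            return None
        self.records.append(record)
        self.records.sort()
        half = len(self.records) // 2            # CI split: roughly half the records
        new_ci = ControlInterval(self.records[half:])   # move to a free CI
        self.records = self.records[:half]
        return new_ci

ci = ControlInterval()
splits = [ci.insert("key%04d" % i) for i in range(600)]
print(sum(s is not None for s in splits))        # number of CI splits triggered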
Three types of record-oriented file organization can be used with VSAM (the contents of linear data sets have no record structure):
Sequential VSAM organization
An ESDS may have an index defined to it to enable access via keys, by defining an Alternate Index. Records in ESDS are stored in order in which they are written by address access. Records are loaded irrespective of their contents and their byte addresses cannot be changed.
Indexed VSAM organization
A KSDS has two parts: the index component and the data component. These may be stored on separate disk volumes.
While a basic KSDS only has one key (the primary key), alternate indices may be defined to permit the use of additional fields as secondary keys. An Alternate Index (AIX) is itself a KSDS.
The data structure used by a KSDS is nowadays known as a B+ tree.
Relative VSAM organization
An RRDS may have an index defined to it to enable access via keys, by defining an Alternate Index.
Linear VSAM organization
An LDS is an unstructured VSAM dataset with a control interval size of a multiple of 4K. It is used by certain system services.
VSAM Data Access Techniques
There are four types of access techniques for VSAM data:
Local Shared Resources (LSR), is optimised for "random" or direct access. LSR access is easy to achieve from CICS.
Global Shared Resources (GSR)
Non-Shared Resources (NSR), which is optimised for sequential access. NSR access has historically been easier to use than LSR for batch programs.
Distributed File Management (DFM), an implementation of a Distributed Data Management Architecture server, enables programs on remote computers to create, manage, and access VSAM files.
Sharing VSAM data
Sharing of VSAM data between CICS regions can be done by VSAM Record-Level Sharing (RLS). This adds record caching and, more importantly, record locking. Logging and commit processing remain the responsibility of CICS which means that sharing of VSAM data outside a CICS environment is severely restricted.
Sharing between CICS regions and batch jobs requires Transactional VSAM, DFSMStvs. This is an optional program that builds on VSAM RLS by adding logging and two-phase commit, using underlying z/OS system services. This permits generalised sharing of VSAM data.
History
VSAM was introduced as a replacement for older access methods and was intended to add function, to be easier to use and to overcome problems of performance and device-dependence. VSAM was introduced in the 1970s when IBM announced virtual storage operating systems (DOS/VS, OS/VS1 and OS/VS2) for its new System/370 series, as successors of the DOS/360 and OS/360 operating systems running on its System/360 computer series. While backwards compatibility was maintained, the older access methods suffered from performance problems due to the address translation required for virtual storage.
The KSDS organization was designed to replace ISAM, the Indexed Sequential Access Method. Changes in disk technology had meant that searching for data in ISAM data sets had become very inefficient. It was also difficult to move ISAM data sets as there were embedded pointers to physical disk locations which became invalid if the data set was moved. IBM also provided a compatibility interface to allow programs coded to use ISAM to use a KSDS instead.
The RRDS organization was designed to replace BDAM, the Basic Direct Access Method. In some cases, BDAM data sets contained embedded pointers which prevented them from being moved. However, most BDAM data sets did not and the incentive to move from BDAM to VSAM RRDS was much less compelling than that to move from ISAM to VSAM KSDS.
Linear data sets were added later, followed by VSAM RLS and then Transactional VSAM.
See also
Job Control Language (JCL)
IBM mainframe utility programs
ISAM
Geneva ERS
Record Management Services, a similar system developed by Digital Equipment Corporation
Notes
References
VSAM Demystified
DFSMStvs Overview and Planning Guide
IBM mainframe operating systems
IBM file systems
Computer file formats |
2058995 | https://en.wikipedia.org/wiki/Scientific%20community%20metaphor | Scientific community metaphor | In computer science, the scientific community metaphor is a metaphor used to aid understanding scientific communities. The first publications on the scientific community metaphor in 1981 and 1982 involved the development of a programming language named Ether that invoked procedural plans to process goals and assertions concurrently by dynamically creating new rules during program execution. Ether also addressed issues of conflict and contradiction with multiple sources of knowledge and multiple viewpoints.
Development
The scientific community metaphor builds on the philosophy, history and sociology of science. It was originally developed building on work in the philosophy of science by Karl Popper and Imre Lakatos. In particular, it initially made use of Lakatos' work on proofs and refutations. Subsequently, development has been influenced by the work of Geof Bowker, Michel Callon, Paul Feyerabend, Elihu M. Gerson, Bruno Latour, John Law, Karl Popper, Susan Leigh Star, Anselm Strauss, and Lucy Suchman.
In particular Latour's Science in Action had great influence. In the book, Janus figures make paradoxical statements about scientific development. An important challenge for the scientific community metaphor is to reconcile these paradoxical statements.
Qualities of scientific research
Scientific research depends critically on monotonicity, concurrency, commutativity, and pluralism to propose, modify, support, and oppose scientific methods, practices, and theories.
Quoting from Carl Hewitt, scientific community metaphor systems have characteristics of monotonicity, concurrency, commutativity, pluralism, skepticism and provenance.
monotonicity: Once something is published it cannot be undone. Scientists publish their results so they are available to all. Published work is collected and indexed in libraries. Scientists who change their mind can publish later articles contradicting earlier ones.
concurrency: Scientists can work concurrently, overlapping in time and interacting with each other.
commutativity: Publications can be read regardless of whether they initiate new research or become relevant to ongoing research. Scientists who become interested in a scientific question typically make an effort to find out if the answer has already been published. In addition they attempt to keep abreast of further developments as they continue their work.
pluralism: Publications include heterogeneous, overlapping and possibly conflicting information. There is no central arbiter of truth in scientific communities.
skepticism: Great effort is expended to test and validate current information and replace it with better information.
provenance: The provenance of information is carefully tracked and recorded.
The above characteristics are limited in real scientific communities. Publications are sometimes lost or difficult to retrieve. Concurrency is limited by resources including personnel and funding. Sometimes it is easier to rederive a result than to look it up. Scientists only have so much time and energy to read and try to understand the literature. Scientific fads sometimes sweep up almost everyone in a field. The order in which information is received can influence how it is processed. Sponsors can try to control scientific activities. In Ether the semantics of the kinds of activity described in this paragraph are governed by the actor model.
Scientific research includes generating theories and processes for modifying, supporting, and opposing these theories. Karl Popper called the process "conjectures and refutations", which, although expressing a core insight, has been shown to be too restrictive a characterization by the work of Michel Callon, Paul Feyerabend, Elihu M. Gerson, Mark Johnson, Thomas Kuhn, George Lakoff, Imre Lakatos, Bruno Latour, John Law, Susan Leigh Star, Anselm Strauss, Lucy Suchman, Ludwig Wittgenstein, and others. The three basic kinds of participation in Ether are proposing, supporting, and opposing. Scientific communities are structured to support competition as well as cooperation.
These activities affect the adherence to approaches, theories, methods, etc. in scientific communities. Current adherence does not imply adherence for all future time. Later developments will modify and extend current understandings. Adherence is a local rather than a global phenomenon. No one speaks for the scientific community as a whole.
Opposing ideas may coexist in communities for centuries. On rare occasions a community reaches a breakthrough that clearly decides an issue previously muddled.
Ether
Ether used viewpoints to relativize information in publications. However, a great deal of information is shared across viewpoints, so Ether made use of inheritance so that information in a viewpoint could be readily used in other viewpoints. Sometimes this inheritance is not exact, as when the laws of physics in Newtonian mechanics are derived from those of special relativity. In such cases Ether used translation instead of inheritance. Bruno Latour has analyzed translation in scientific communities in the context of actor network theory. Imre Lakatos studied very sophisticated kinds of translations of mathematical (e.g., the Euler formula for polyhedra) and scientific theories.
Viewpoints were used to implement natural deduction (Fitch [1952]) in Ether. In order to prove a goal of the form P implies Q in a viewpoint V, it is sufficient to create a new viewpoint V' that inherits from V, assert P in V', and then prove Q in V'. A similar idea was originally introduced into theorem proving in programming languages by Rulifson, Derksen, and Waldinger [1973], except that since Ether is concurrent rather than sequential, it does not rely on being in a single viewpoint that can be sequentially pushed and popped to move to other viewpoints.
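A minimal sketch of the viewpoint mechanism follows (Python; the class and method names are invented, and the real Ether was a concurrent rule-based system rather than this toy). A child viewpoint inherits every assertion of its parent, while assertions made in the child stay local to it, which is what the natural-deduction technique above relies on.

class Viewpoint:
    def __init__(self, parent=None):
        self.parent = parent
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)          # publication is monotonic: never undone

    def holds(self, fact):
        return fact in self.facts or (
            self.parent is not None and self.parent.holds(fact))

v = Viewpoint()
v.assert_fact("laws of motion")
v_prime = Viewpoint(parent=v)         # inherits everything asserted in v
v_prime.assert_fact("P")              # hypothesis assumed only in v_prime
print(v_prime.holds("laws of motion"), v_prime.holds("P"), v.holds("P"))
# -> True True False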
Ultimately resolving issues among these viewpoints are matters for negotiation (as studied in the sociology and philosophy of science by Geof Bowker, Michel Callon, Paul Feyerabend, Elihu M. Gerson, Bruno Latour, John Law, Karl Popper, Susan Leigh Star, Anselm Strauss, Lucy Suchman, etc.).
Emphasis on communities rather than individuals
Alan Turing was one of the first to attempt to more precisely characterize individual intelligence through the notion of his famous Turing test. This paradigm was developed and deepened in the field of artificial intelligence. Allen Newell and Herbert A. Simon did pioneering work in analyzing the protocols of individual human problem-solving behavior on puzzles. More recently, Marvin Minsky has developed the idea that the mind of an individual human is composed of a society of agents in Society of Mind (see the analysis by Push Singh).
The above research on individual human problem solving is complementary to the scientific community metaphor.
Current applications
Some developments in hardware and software technology for the Internet are being applied in light of the scientific community metaphor.
Legal concerns (e.g., HIPAA, Sarbanes-Oxley, "The Books and Records Rules" in SEC Rule 17a-3/4 and "Design Criteria Standard for Electronic Records Management Software Applications" in DOD 5015.2 in the U.S.) are leading organizations to store information monotonically forever. It has just now become less costly in many cases to store information on magnetic disk than on tape. With increasing storage capacity, sites can monotonically record what they read from the Internet as well as monotonically recording their own operations.
Search engines currently provide rudimentary access to all this information. Future systems will provide interactive question answering broadly conceived that will make all this information much more useful.
Massive concurrency (i.e., Web services and multi-core computer architectures) lies in the future posing enormous challenges and opportunities for the scientific community metaphor. In particular, the scientific community metaphor is being used in client cloud computing.
See also
Paraconsistent logics
Planner
Science studies
The Structure of Scientific Revolutions
References
Further reading
Julian Davies. "Popler 1.5 Reference Manual" University of Edinburgh, TPU Report No. 1, May 1973.
Frederic Fitch. Symbolic Logic: an Introduction. Ronald Press, New York, 1952.
Ramanathan Guha. Contexts: A Formalization and Some Applications PhD thesis, Stanford University, 1991.
Pat Hayes. "Computation and Deduction" Mathematical Foundations of Computer Science: Proceedings of Symposium and Summer School, Štrbské Pleso, High Tatras, Czechoslovakia, September 3–8, 1973.
Carl Hewitt. "PLANNER: A Language for Proving Theorems in Robots" IJCAI 1969
Carl Hewitt. "Procedural Embedding of Knowledge In Planner" IJCAI 1971.
Carl Hewitt, Peter Bishop and Richard Steiger. "A Universal Modular Actor Formalism for Artificial Intelligence" IJCAI 1973.
Carl Hewitt. Large-scale Organizational Computing requires Unstratified Reflection and Strong Paraconsistency in "Coordination, Organizations, Institutions, and Norms in Agent Systems III" edited by Jaime Sichman, Pablo Noriega, Julian Padget and Sascha Ossowski. Springer. 2008.
Carl Hewitt. Development of Logic Programming: What went wrong, What was done about it, and What it might mean for the future What Went Wrong and Why: Lessons from AI Research and Applications; papers from the 2008 AAAI Workshop. Technical Report WS-08-14. AAAI Press. July 2008.
William Kornfeld and Carl Hewitt. "The Scientific Community Metaphor" IEEE Transactions on Systems, Man and Cybernetics, SMC-11. 1981
Bill Kornfeld. "The Use of Parallelism to Implement a Heuristic Search" IJCAI 1981.
Bill Kornfeld. Parallelism in Problem Solving MIT EECS Doctoral Dissertation. August 1981.
Bill Kornfeld. "Combinatorially Implosive Algorithms" CACM. 1982.
Robert Kowalski "Predicate Logic as Programming Language" Memo 70, Department of Artificial Intelligence, Edinburgh University. 1973
Imre Lakatos. "Proofs and Refutations" Cambridge: Cambridge University Press. 1976.
Bruno Latour. Science In Action: How to Follow Scientists and Engineers Through Society, Harvard University Press, Cambridge Mass., USA, 1987.
John McCarthy. "Generality in Artificial Intelligence" CACM. December 1987.
Jeff Rulifson, Jan Derksen, and Richard Waldinger. "QA4, A Procedural Calculus for Intuitive Reasoning" SRI AI Center Technical Note 73, November 1973.
Earl Sacerdoti, et al., "QLISP A Language for the Interactive Development of Complex Systems" AFIPS. 1976
Push Singh "Examining the Society of Mind" To appear in Computing and Informatics
Actor model (computer science)
Logic programming
Science studies
Philosophy of science
Theoretical computer science |
2073628 | https://en.wikipedia.org/wiki/Television%20encryption | Television encryption | Television encryption, often referred to as scrambling, is encryption used to control access to pay television services, usually cable, satellite, or Internet protocol television (IPTV) services.
History
Pay television exists to make revenue from subscribers, and sometimes those subscribers do not pay. The prevention of piracy on cable and satellite networks has been one of the main factors in the development of Pay TV encryption systems.
The early cable-based Pay TV networks used no security. This led to problems with people connecting to the network without paying. Consequently, some methods were developed to frustrate these self-connectors. The early Pay TV systems for cable television were based on a number of simple measures. The most common of these was a channel-based filter that would effectively stop the channel being received by those who had not subscribed. These filters would be added or removed according to the subscription. As the number of television channels on these cable networks grew, the filter-based approach became increasingly impractical.
Other techniques such as adding an interfering signal to the video or audio began to be used as the simple filter solutions were easily bypassed. As the technology evolved, addressable set-top boxes became common, and more complex scrambling techniques such as digital encryption of the audio or video cut and rotate (where a line of video is cut at a particular point and the two parts are then reordered around this point) were applied to signals.
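As a concrete illustration of the cut-and-rotate idea, the short Python sketch below scrambles a toy list of samples; real systems derived the cut point of each line from key data, which is omitted here.

def cut_and_rotate(line, cut):
    """Scramble one video line: cut at 'cut' and swap the two halves."""
    return line[cut:] + line[:cut]

def descramble(line, cut):
    """Undo the rotation by cutting at the complementary point."""
    return line[len(line) - cut:] + line[:len(line) - cut]

line = list(range(16))                 # stand-in for one line of video samples
scrambled = cut_and_rotate(line, 5)
assert descramble(scrambled, 5) == line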
Encryption was used to protect satellite-distributed feeds for cable television networks. Some of the systems used for cable feed distribution were expensive. As the DTH market grew, less secure systems began to be used. Many of these systems (such as OAK Orion) were variants of cable television scrambling systems that affected the synchronisation part of the video, inverted the video signal, or added an interfering frequency to the video. All of these analogue scrambling techniques were easily defeated.
In France, Canal+ launched a scrambled service in 1984. It was also claimed that it was an unbreakable system. Unfortunately for that company, an electronics magazine, "Radio Plans", published a design for a pirate decoder within a month of the channel launching.
In the US, HBO was one of the first services to encrypt its signal using the VideoCipher II system. In Europe, FilmNet scrambled its satellite service in September 1986, thus creating one of the biggest markets for pirate satellite TV decoders in the world, because the system that FilmNet used was easily hacked. One of FilmNet's main attractions was that it would screen hard-core porn films on various nights of the week. The VideoCipher II system proved somewhat more difficult to hack, but it eventually fell prey to the pirates.
Conditional access
Cable and early satellite television encryption
Analog and digital pay television have several conditional access systems that are used for pay-per-view (PPV) and other subscriber-related services. Originally, analog-only cable television systems relied on set-top boxes to control access to programming, as television sets originally were not "cable-ready". Analog encryption was typically limited to premium channels such as HBO or channels with adult-oriented content. In those cases, various proprietary video synchronization suppression methods were used to control access to programming. In some of these systems the necessary sync signal was carried on a separate subcarrier; in others the sync polarity was simply inverted. In the latter case, if the scrambling was applied to PAL, a SECAM L television with a cable tuner could partially descramble the signal, though only in black and white and with inverted luminance, so a multi-standard television supporting PAL L was preferable in order to decode the colour as well. Part of the video signal would, however, also be received as audio, so a second television, preferably one without auto-mute, was needed for audio decoding. Analog set-top boxes have largely been replaced by digital set-top boxes that can directly control access to programming as well as digitally decrypt signals.
Although several analog encryption types were tested in the early 1980s, VideoCipher II became the de facto analog encryption standard that C-Band satellite pay TV channels used. Early adopters of VCII were HBO and Cinemax, encrypting full time beginning in January 1986; Showtime and The Movie Channel beginning in May 1986; and CNN and Headline news, in July of that year. VideoCipher II was replaced as a standard by VCII+ in the early 1990s, and it in turn was replaced by VCII+ RS. A VCII-capable satellite receiver is required to decode VCII channels. VCII has largely been replaced by DigiCipher 2 in North America. Originally, VCII-based receivers had a separate modem technology for pay-per-view access known as Videopal. This technology became fully integrated in later-generation analog satellite television receivers.
VideoCipher I (deprecated)
VideoCipher II (deprecated)
VideoCipher II+
VideoCipher II RS (Renewable Security)
Digital cable and satellite television encryption
DigiCipher 2 is General Instrument's proprietary video distribution system. DigiCipher 2 is based upon MPEG-2. A 4DTV satellite receiver is required to decode DigiCipher 2 channels. In North America, most digital cable programming is accessed with DigiCipher 2-based set-top boxes. DigiCipher 2 may also be referred to as DCII.
PowerVu is another popular digital encryption technology used for non-residential usage. PowerVu was developed by Scientific Atlanta. Other commercial digital encryption systems are, Nagravision (by Kudelski), Viaccess (by France Telecom), and Wegener.
In the US, both DirecTV and Dish Network direct-broadcast satellite systems use digital encryption standards for controlling access to programming. DirecTV uses VideoGuard, a system designed by NDS. DirecTV has been cracked in the past, which led to an abundance of cracked smartcards being available on the black market. However, a switch to a stronger form of smart card (the P4 card) wiped out DirecTV piracy soon after it was introduced. Since then, no public cracks have become available. Dish Network uses Nagravision (2 and 3) encryption. The now-defunct VOOM and PrimeStar services both used General Instruments/Motorola equipment, and thus used a DigiCipher 2-based system very similar to that of earlier 4DTV large dish satellite systems.
In Canada, both Bell Satellite TV and Shaw Direct DBS systems use digital encryption standards. Bell TV, like Dish Network, uses Nagravision for encryption. Shaw Direct, meanwhile, uses a DigiCipher 2-based system, due to their equipment also being sourced from General Instruments/Motorola.
Older television encryption systems
Zenith Phonevision
Zenith Electronics developed an encryption scheme for their Phonevision system of the 1950s and 1960s.
Oak ORION
Oak Orion was originally used for analog satellite television pay channel access in Canada. It was innovative for its time as it used digital audio. It has been completely replaced by digital encryption technologies. Oak Orion was used by Sky Channel in Europe between the years 1982 and 1987, and M-Net in South Africa from 1986 to 2018. Oak developed related encryption systems for cable TV and broadcast pay TV services such as ONTV.
Leitch Technology
Leitch Viewguard is an analog encryption standard used primarily by broadcast TV networks in North America. Its method of scrambling re-orders the lines of video (line shuffle) but leaves the audio intact. Terrestrial broadcast CATV systems in Northern Canada used this conditional access system for many years. It is only occasionally used today on some satellite circuits because of its similarity to D2-MAC and B-MAC.
There was also a version that encrypted the audio using a digital audio stream in the horizontal blanking interval, like the VCII system. One US network used this for its affiliate feeds and would turn off the analog subcarriers on the satellite feed.
B-MAC
B-MAC has not been used for DTH applications since PrimeStar switched to an all-digital delivery system in the mid-1990s.
VideoCrypt
VideoCrypt was an analogue cut and rotate scrambling system with a smartcard based conditional access system. It was used in the 1990s by several European satellite broadcasters, mainly British Sky Broadcasting. It was also used by Sky New Zealand (Sky-NZ). One version of Videocrypt (VideoCrypt-S) had the capability of scrambling sound. A soft encryption option was also available where the encrypted video could be transmitted with a fixed key and any VideoCrypt decoder could decode it.
RITC Discret 1
RITC Discret 1 is a system based on horizontal video line delay and audio scrambling. The start point of each line of video was pseudorandomly delayed by either 0 ns, 902 ns, or 1804 ns. First used in 1984 by French channel Canal Plus, it was widely compromised after the December 1984 issue of "Radio Plans" magazine printed decoder plans. The BBC also used the Discret system in the late 1980s, as part of testing the use of off-air hours for encrypted specialist programming, with BMTV (British Medical Television) being broadcast on BBC Two. This would ultimately lead to the launch of the scrambled BBC Select service in the early 1990s.
SATPAC
Used by European channel FilmNet, the SATPAC interfered with the horizontal and vertical synchronisation signals and transmitted a signal containing synchronisation and authorisation data on a separate subcarrier. The system was first used in September 1986 and saw many upgrades as it was easily compromised by pirates. By September 1992, FilmNet changed to D2-MAC EuroCrypt.
Telease MAAST / Sat-Tel SAVE
This system added an interfering sine wave with a frequency of approximately 93.750 kHz to the video signal, roughly six times the horizontal refresh frequency. It offered optional sound scrambling using spectrum inversion. It was used in the UK by the BBC for its world service broadcasts and by the now-defunct UK movie channel "Premiere".
Payview III
Used by German/Swiss channel Teleclub in the early 1990s, this system employed various methods such as video inversion, modification of synchronisation signals, and a pseudo line delay effect.
D2-MAC EuroCrypt
Conditional Access system using the D2-MAC standard. Developed mainly by France Telecom, the system was smartcard based. The encryption algorithm in the smartcard was based on DES. It was one of the first smart card based systems to be compromised.
Nagravision analogue system
An older Nagravision system for scrambling analogue satellite and terrestrial television programs was used in the 1990s, for example by the German pay-TV broadcaster Premiere. In this line-shuffling system, 32 lines of the PAL TV signal are temporarily stored in both the encoder and decoder and read out in permuted order under the control of a pseudorandom number generator. A smartcard security microcontroller (in a key-shaped package) decrypts data that is transmitted during the blanking intervals of the TV signal and extracts the random seed value needed for controlling the random number generation. The system also permitted the audio signal to be scrambled by inverting its spectrum at 12.8 kHz using a frequency mixer.
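A simplified sketch of line-shuffle scrambling follows (Python). The block size of 32 matches the description above, but the permutation generator and the seed handling are invented stand-ins rather than Nagravision's actual algorithm; the decoder simply regenerates the same pseudorandom permutations from the transmitted seed and inverts them.

import random

BLOCK = 32

def permutations(n_lines, seed):
    rng = random.Random(seed)
    for start in range(0, n_lines, BLOCK):
        size = min(BLOCK, n_lines - start)
        order = list(range(size))
        rng.shuffle(order)
        yield start, order

def scramble(lines, seed):
    out = list(lines)
    for start, order in permutations(len(lines), seed):
        for dst, src in enumerate(order):
            out[start + dst] = lines[start + src]
    return out

def descramble(lines, seed):
    out = list(lines)
    for start, order in permutations(len(lines), seed):
        for dst, src in enumerate(order):
            out[start + src] = lines[start + dst]   # invert the permutation
    return out

frame = list(range(288))                # stand-in for the lines of one field
assert descramble(scramble(frame, seed=0xC0FFEE), seed=0xC0FFEE) == frame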
See also
Conditional access
Pirate decryption
References
External links
rec.video.satellite.tvro FAQ
Digital rights management
Television technology
Digital rights management systems |
41618516 | https://en.wikipedia.org/wiki/Superfish | Superfish | Superfish was an advertising company that developed various advertising-supported software products based on a visual search engine. The company was based in Palo Alto, California. It was founded in Israel in 2006 and has been regarded as part of the country's "Download Valley" cluster of adware companies. Superfish's software is malware and adware. The software was bundled with various applications as early as 2010, and Lenovo began to bundle the software with some of its computers in September 2014. On February 20, 2015, the United States Department of Homeland Security advised uninstalling it and its associated root certificate, because they make computers vulnerable to serious cyberattacks, including interception of passwords and sensitive data being transmitted through browsers.
History
Superfish was founded in 2006 by Adi Pinhas and Michael Chertok. Pinhas is a graduate of Tel Aviv University. In 1999, he co-founded Vigilant Technology, which "invented digital video recording for the surveillance market", according to his LinkedIn profile. Before that, he worked at Verint, an intelligence company that analyzed telephone signals and had allegedly tapped Verizon communication lines. Chertok is a graduate of Technion and Bar-Ilan University with 10 years of experience in "large scale real-time data mining systems".
Since its founding, Superfish has used a team of "a dozen or so PhDs" primarily to develop algorithms for the comparison and matching of images. It released its first product, WindowShopper, in 2011. WindowShopper immediately prompted a large number of complaints on Internet message boards, from users who did not know how the software had been installed on their machines.
Superfish initially received funding from Draper Fisher Jurvetson, and to date has raised over $20 million, mostly from DFJ and Vintage Investment Partners. Forbes listed the company as number 64 on their list of America's most promising companies.
Pinhas in 2014 stated that "Visual search is not here to replace the keyboard ... visual search is for the cases in which I have no words to describe what I see."
As of 2014, Superfish products had over 80 million users.
In May 2015, following the Lenovo security incident (see below) and to distance itself from the fallout, the team behind Superfish changed its name and moved its activities to JustVisual.com.
Lenovo security incident
Users had expressed concerns about scans of SSL-encrypted web traffic by Superfish Visual Search software pre-installed on Lenovo machines since at least early December 2014. This became a major public issue, however, only in February 2015. The installation included a universal self-signed certificate authority; the certificate authority allows a man-in-the-middle attack to introduce ads even on encrypted pages. The certificate authority had the same private key across laptops; this allows third-party eavesdroppers to intercept or modify HTTPS secure communications without triggering browser warnings by either extracting the private key or using a self-signed certificate.
On February 20, 2015, Microsoft released an update for Windows Defender which removes Superfish. In an article in Slate tech writer David Auerbach compares the incident to the Sony DRM rootkit scandal and said of Lenovo's actions, "installing Superfish is one of the most irresponsible mistakes an established tech company has ever made." On February 24, 2015, Heise Security published an article revealing that the certificate in question would also be spread by a number of applications from other companies including SAY Media and Lavasoft's Ad-Aware Web Companion.
Criticisms of Superfish software predated the "Lenovo incident" and were not limited to the Lenovo user community: as early as 2010, users of computers from other manufacturers had expressed concerns in online support and discussion forums that Superfish software had been installed on their computers without their knowledge, by being bundled with other software.
CEO Pinhas, in a statement prompted by the Lenovo disclosures, maintained that the security flaw introduced by Superfish software was not, directly, attributable to its own code; rather, "it appears [a] third-party add-on introduced a potential vulnerability that we did not know about" into the product. He identified the source of the problem as code authored by the tech company Komodia, which deals with, among other things, website security certificates. Komodia code is also present in other applications, among them, parental-control software; and experts have said "the Komodia tool could imperil any company or program using the same code" as that found within Superfish. In fact, Komodia itself refers to its HTTPS-decrypting and interception software as an "SSL hijacker", and has been doing so since at least January 2011. Its use by more than 100 corporate clients may jeopardize "the sensitive data of not just Lenovo customers but also a much larger base of PC users". Komodia was closed in 2018.
Products
Superfish's first product, WindowShopper, was developed as a browser add-on for desktop and mobile devices, directing users who hover over browser images to shopping Web sites to purchase similar products. As of 2014, WindowShopper had approximately 100 million monthly users, and according to Xconomy, "a high conversion to sale rate for soft goods". Superfish's business model is based on receiving affiliate fees on each sale.
The core technology, Superfish VisualDiscovery, is installed as a man-in-the-middle proxy on some Lenovo laptops. It injects advertising into results from Internet search engines; it also intercepts encrypted (SSL/TLS) connections.
In 2014, Superfish released new apps based on its image search technology.
See also
Browser hijacking
Computer vision
Concept-based image indexing
Content-based image retrieval
Image processing
Image retrieval
Malware
References
2006 establishments in California
Companies based in Palo Alto, California
Digital marketing companies of the United States
Software companies established in 2006
Adware |
19468225 | https://en.wikipedia.org/wiki/LOCUS | LOCUS | LOCUS is a discontinued distributed operating system developed at UCLA during the 1980s. It was notable for providing an early implementation of the single-system image idea, where a cluster of machines appeared to be one larger machine.
A desire to commercialize the technologies developed for LOCUS inspired the creation of the Locus Computing Corporation which went on to include ideas from LOCUS in various products, including OSF/1 AD and, finally, the SCO–Tandem UnixWare NonStop Clusters product.
Description
The LOCUS system was created at UCLA between 1980 and 1983. The initial implementation ran on a cluster of PDP-11/45s using 1 and 10 megabit ring networks; by 1983 the system was running on 17 VAX-11/750s using 10 megabit Ethernet. The system was Unix compatible and provided both a single root view of the file system and a unified process space across all nodes.
The development of LOCUS was supported by an ARPA research contract, DSS-MDA-903-82-C-0189.
File system
In order to allow reliable and rapid access to the cluster wide filesystem LOCUS used replication, the data of files could be stored on more than one node and LOCUS would keep the various copies up to date. This provided particularly good access times for files that were read more often than they were written, the normal case for directories for example.
In order to ensure that all access was made to the most recent version of any file, LOCUS would nominate one node as the "current synchronization site" (CSS) for a particular file system. All accesses to files in a file system had to be coordinated with the appropriate CSS.
Node dependent files
As with other SSI systems, LOCUS sometimes found it necessary to break the illusion of a single system, notably to allow some files to be different on a per-node basis. For example, it was possible to build a LOCUS cluster containing both PDP-11/45 and VAX 750 machines, but the instruction sets used were not identical, so two versions of each object program would be needed.
The solution was to replace the files that needed to be different on a per node basis by special hidden directories. These directories would then contain the different versions of the file. When a user accessed one of these hidden directories the system would check the user's context and open the appropriate file.
For example, if the user was running on one of the PDP-11/45's and typed the command /bin/who then the system would find that /bin/who was actually a hidden directory and run the command /bin/who/45. Another user on a VAX node who typed /bin/who would run the command /bin/who/vax.
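The Python sketch below captures this lookup behaviour; the table of hidden directories, the architecture labels and the function are invented for illustration, since the real mechanism lived inside the LOCUS kernel's path-name resolution.

HIDDEN_DIRS = {
    "/bin/who": {"pdp11/45": "/bin/who/45", "vax": "/bin/who/vax"},
}

def resolve(path, node_arch):
    """Return the file actually opened for 'path' on a node of type 'node_arch'."""
    variants = HIDDEN_DIRS.get(path)
    return variants[node_arch] if variants else path

print(resolve("/bin/who", "pdp11/45"))   # -> /bin/who/45
print(resolve("/bin/who", "vax"))        # -> /bin/who/vax
print(resolve("/etc/passwd", "vax"))     # -> /etc/passwd (an ordinary file)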
Devices
LOCUS provided remote access to I/O devices.
Processes
LOCUS provided a single process space. Processes could be created on any node on the system. Both the Unix fork and exec calls would examine an advice list which determined on which node the process would be run. LOCUS was designed to work with heterogeneous nodes, (e.g., a mix of VAX 750s and PDP 11/45s) and could decide to execute a process on a different node if it needed a particular instruction set. As an optimization a run call was added which was equivalent to a combined fork and exec, thus avoiding the overhead of copying the process memory image to another node before overwriting it by the new image.
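A schematic sketch of the node-selection step is shown below (Python). The node table, architecture labels and selection rule are invented, and the real advice list mechanism offered more control than this; the point is simply that run() or the fork/exec pair consulted the advice list and picked a node able to execute the required instruction set.

NODE_ARCH = {"node1": "pdp11/45", "node2": "vax", "node3": "vax"}

def choose_node(advice_list, required_arch, local_node):
    """Prefer advised nodes that match the program's instruction set."""
    for node in advice_list:
        if NODE_ARCH.get(node) == required_arch:
            return node
    if NODE_ARCH.get(local_node) == required_arch:
        return local_node                # fall back to running locally
    raise LookupError("no node can execute " + required_arch)

print(choose_node(["node2", "node3"], "vax", local_node="node1"))   # -> node2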
Pipes
Processes could use pipes for inter-node communication, including named pipes.
Partitioning
The LOCUS system was designed to be able to cope with network partitioning - one or more nodes becoming disconnected from the rest of the system. As the file system was replicated the disconnected nodes could continue to access files. When the nodes were reconnected any files modified by the disconnected nodes would be merged back into the system. For some file types (for example mailboxes) the system would perform the merge automatically, for others the user would be informed (by mail) and tools were provided to allow access to the different versions of the file.
Notes
References
Proprietary operating systems
Cluster computing
Distributed operating systems |
12751285 | https://en.wikipedia.org/wiki/Bootstrapping%20Server%20Function | Bootstrapping Server Function | A Bootstrapping Server Function (BSF) is an intermediary element in Cellular networks which provides application-independent functions for mutual authentication of user equipment and servers unknown to each other and for 'bootstrapping' the exchange of secret session keys afterwards. This allows the use of additional services like Mobile TV and PKI, which need authentication and secured communication.
GBA/GAA Setup
The setup and function to deploy a generic security relation as described is called Generic Bootstrapping Architecture (GBA) or Generic Authentication Architecture (GAA). In short, it consists of the following elements.
user equipment (UE), e.g. a mobile cellular telephone; needs access to a specific service
application server (NAF: Network Application Function), e.g. for mobile TV; provides the service
BSF (Bootstrapping Server Function); arranges security relation between UE and NAF
mobile network operator's Home Subscriber Server (HSS); hosts user profiles.
In this case, the term 'bootstrapping' is related to building a security relation with a previously unknown device first and to allow installing security elements (keys) in the device and the BSF afterwards.
Workflow
The BSF is introduced by the application server (NAF), after an unknown UE device is trying to get service access: the NAF refers the UE to the BSF. UE and BSF mutually authenticate via 3GPP protocol AKA (Authentication and Key Agreement); additionally, the BSF sends related queries to the Home Subscriber Server (HSS).
Afterwards, UE and BSF agree on a session key to be used for encrypted data exchange with the application server (NAF). When the UE again connects to the NAF, the NAF is able to obtain the session key as well as user-specific data from the BSF and can start data exchange with the end device (UE), using the related session keys for encryption.
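The sketch below shows the general shape of that key agreement in Python. It is heavily simplified: the HMAC-SHA256 construction, the labels and the identifiers are stand-ins chosen for illustration, not the exact key derivation function of 3GPP TS 33.220. The point is that after bootstrapping, the UE derives the NAF-specific key itself, while the NAF obtains the same key from the BSF by presenting the bootstrapping transaction identifier, so both ends share a session key without the NAF ever learning the long-term subscriber key.

import hmac, hashlib

def kdf(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Result of a successful AKA run between UE and BSF (illustrative values)
ck_ik = bytes(32)                        # CK || IK shared by UE and BSF
ks = ck_ik                               # bootstrapped master key Ks
b_tid = b"tid-1234@bsf.example.net"      # B-TID handed to the UE

naf_id = b"mobiletv.example.com"         # the application server's identity

# UE side: derive the application-specific key locally
ks_naf_ue = kdf(ks, b"gba", naf_id)

# BSF side: look up Ks via the B-TID presented by the NAF, derive the same key
bsf_store = {b_tid: ks}
ks_naf_bsf = kdf(bsf_store[b_tid], b"gba", naf_id)

assert ks_naf_ue == ks_naf_bsf           # UE and NAF now share a session key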
Standards
The BSF is standardised in recent versions of the 3GPP standards for GAA (Generic Authentication Architecture) and GBA (Generic Bootstrapping Architecture): 3GPP TS 33.919, 33.220, 24.109 and 29.109.
External links
DVB-H News
BMCO forum
Open Mobile Alliance
3GPP
BSF in LTE network
castLabs (commercial BSF supplier)
Nexcom Systems (OEM commercial BSF supplier)
3GPP TS 24.109 version 8.3.0 Release 8
Mobile telecommunications standards |
2473213 | https://en.wikipedia.org/wiki/Skype%20for%20Business | Skype for Business | Skype for Business (formerly Microsoft Lync and Office Communicator) was an enterprise software application for instant messaging and videotelephony developed by Microsoft as part of the Microsoft Office suite. It is designed for use with the on-premises Skype for Business Server software, and a software as a service version offered as part of Office 365. It supports text, audio, and video chat, and integrates with Microsoft Office components such as Exchange and SharePoint.
In 2015, the software was rebranded from Lync to Skype for Business, co-branding it with the Microsoft-owned consumer messaging platform Skype (which had begun to integrate with Lync in 2013).
In September 2017, Microsoft announced that it would phase out Skype for Business in favor of Microsoft Teams, a new cloud-based collaboration platform. Support for Skype for Business Online ended in July 2021, and Skype for Business Server 2019 will receive extended support through October 14, 2025.
History
Microsoft released Office Communicator 2007 to production on July 28, 2007 and launched it on October 27, 2007. It was followed by Office Communicator 2007 R2, released on March 19, 2009. Microsoft released the successor to Office Communicator, Lync 2010, on January 25, 2011. In November 2010, the platform was renamed Lync.
In May 2013, Microsoft announced that it would allow Lync users to communicate with Skype, a consumer IM platform it had acquired in 2011. This initially included support for text and voice communications. On November 11, 2014, Microsoft announced that Lync would be renamed Skype for Business in 2015, also adding support for video calls with Skype users.
On September 22, 2015, Skype for Business 2016 was released alongside Office 2016. On October 27, 2016, the Skype for Business for Mac client was released.
On September 25, 2017, Microsoft announced that Skype for Business would be discontinued in the future in favor of Microsoft Teams, a cloud-based collaboration platform for corporate groups (comparable to Slack) integrating persistent messaging, video conferencing, file storage, and application integration. Microsoft released a final on-premises version of Skype for Business Server as part of Office 2019 in late 2018, and announced in July 2019 that the hosted Skype for Business Online would cease functioning on July 31, 2021. Since September 2019, Skype for Business Online is no longer offered to new Office 365 subscribers, who are directed to Microsoft Teams instead. Skype for Business Server 2019 is supported through October 14, 2025.
Versions
Exchange 2000 Conferencing
Windows Messenger 5.0 (Live Communications Server 2003)
Windows Messenger 5.1 and Microsoft Office Communicator 2005 (Live Communications Server 2005)
Office Communicator 2007
Office Communicator 2007 R2
Lync 2010
Lync 2013
Skype for Business 2015
Skype for Business 2016
Skype for Business 2019
Features
The Basic features of Skype for Business include:
Instant messaging (IM)
Audio call
Video call
Advanced features relate to integration with other Microsoft software:
Availability of contacts based on Microsoft Outlook contacts stored in a Microsoft Exchange Server
Users can retrieve contact lists from a local directory service such as Microsoft Exchange Server
Microsoft Office can show if other people are working on the same document
All communication between the clients takes place through a Skype for Business Server. This makes communications more secure, as messages do not need to leave the corporate intranet, unlike with the Internet-based Windows Live Messenger. The server can be set to relay messages to other instant messaging networks, avoiding installation of extra software at the client side.
A number of client types are available for Microsoft Skype for Business, including mobile clients.
Uses SIP as the basis for its client communication protocol
Offers support for TLS and SRTP to encrypt and secure signaling and media traffic
Allows sharing files
Note: With the release of Lync Server 2013 in October 2012, a new collaboration feature, "Persistent Group Chat", was introduced, allowing multi-party chat with preservation of content between chat sessions. However, only the native Windows OS client, and no other platform, supports this feature at this time. The main new features of this version are real-time multi-client collaborative software capabilities, which allow teams of people to see and simultaneously work on the same documents and communications session. Lync and Skype for Business implement these features as follows:
Collaboration through Whiteboard documents, where the participants have freedom to share text, drawing and graphical annotations.
Collaboration through PowerPoint documents, where the participants can control and see presentations, as well as allow everybody to add text, drawing and graphical annotations.
Polling lists, where Presenters can organize polls and all participants can vote and see results.
Desktop sharing, usually by allowing participants to see and collaborate on a Windows screen
Windows applications sharing, by allowing participants to see and collaborate on a specific application.
All collaboration sessions get automatically defined as conferences, where clients can invite more contacts.
Conference initiators (usually called "organizers") can either promote participants to act as presenters or demote them to act as attendees. They can also define some basic policies about what presenters and attendees can see and do. Deeper details of policy permissions are defined at server level.
Following Microsoft's acquisition of Skype in May 2011, the Lync and Skype platforms could be connected, but sometimes only after lengthy provisioning time.
Extensions
Skype for Business uses a number of extensions to the SIP/SIMPLE instant-messaging protocol for some features. As with most instant-messaging platforms, non-Microsoft instant-messaging clients that have not implemented these publicly available extensions may not work correctly or have complete functionality. Skype for Business supports federated presence and IM to other popular instant message services such as AOL, Yahoo, MSN, and any service using the XMPP protocol, although support for XMPP has been deprecated in Skype for Business 2019. Text instant-messaging in a web browser is available via Skype for Business integration within Exchange Outlook Web App.
Although other IM protocols such as AIM and Yahoo! do have wider support by third-party clients, these protocols have been largely reverse-engineered by outside developers. Microsoft does offer details of its extensions on MSDN and provides an API kit to help developers build platforms that can interoperate with Skype for Business Server and clients.
Clients
As of May 2018, the following Skype for Business clients are available:
Windows (Pro and Enterprise only, can download free Skype for Business Basic client) and macOS (included with Office 365)
Linux (provided by TEL.RED)
iOS (Microsoft app in iTunes app store; alternative client provided by TEL.RED)
Android (Microsoft app in Google Play; alternative client provided by TEL.RED)
Windows Phone and Windows 10 Mobile apps were discontinued by Microsoft in May 2018.
See also
Similar discontinued Microsoft products
Windows Meeting Space
Microsoft NetMeeting
Microsoft Office Live Meeting
Others
Comparison of web conferencing software
List of Microsoft–Nortel Innovative Communications Alliance products
References
External links
Download Skype for Business Apps Across All Your Devices
Download Microsoft Skype for Business Basic from Official Microsoft Download Center
Install Skype for Business - Office Support
Skype for Business Online to Be Retired in 2021
Skype
Innovative Communications Alliance products
Microsoft Office
Windows instant messaging clients
Videotelephony
IOS software
Android (operating system) software
Business chat software
2015 software
Microsoft articles needing attention |
30299718 | https://en.wikipedia.org/wiki/Erwin%20Tomash | Erwin Tomash | Erwin Tomash (November 17, 1921 – December 10, 2012) was an American engineer who co-founded Dataproducts Corporation, which specialized in computer technology, specifically printers and core memory units. He is recognized for his early pioneering work with computer equipment peripherals. Tomash led the creation of the Charles Babbage Institute and is responsible for The Adelle and Erwin Tomash Fellowship in the History of Information Technology and The Erwin Tomash Library. He died at age 91 in his home in Soquel, California due to complications from Alzheimer's disease.
Education
Born and raised in Saint Paul, Minnesota, Erwin Tomash graduated from the University of Minnesota with his electrical engineering degree in 1943.
Early life
Upon graduating from the University of Minnesota, Tomash joined the U.S. Army Signal Corps, where he worked with radar, and was awarded the Bronze Star for his wartime activities. Following his time with the Army Signal Corps, Tomash served at the Naval Ordnance Laboratory briefly, before joining the Engineering Research Associates. As a research associate, he worked on developing electronic computers, including the ERA 1103 or UNIVAC Scientific. In 1956, he joined Telemeter Magnetic in Los Angeles where he became the company's president. He then oversaw Telemeter Magnetics' design of core memories for computers and in 1962 left Telemeter Magnetic, and co-founded Dataproducts Corporation.
Dataproducts Corporation
Dataproducts Corporation was co-founded by Erwin Tomash in 1962, and specialized in computer peripherals, with a focus on printers. In 1966, core memory was added to the product line, and due to its resulting expansion, the company relocated to Woodland Hills, Los Angeles, California in 1968. The company acquired Staff Dynamics, a personnel agency, and Uptime, a manufacturer of card readers; it also served as an incubator for Informatics, an early software company. By 1970 the company had become the world's leading independent printer manufacturer. In 1980 Tomash retired and Graham Tyson, already chief operating officer and president, succeeded him as chairman.
Awards/Accomplishments
In 1987 Erwin Tomash was honored by the IEEE Computer Society, and received the Computer Entrepreneur Award in recognition of his early pioneering work with computer peripherals.
Erwin and his wife Adelle Tomash were instrumental in establishing the Charles Babbage Institute, which named its highly regarded library and archives, a fellowship program, and the CBI Tomash computer history reprint series in their honor.
Erwin and Adelle Tomash, as well as the Tomash Family Foundation, were recognized in a 2009-2010 philanthropy report by the University of California, Santa Cruz (UCSC) as having contributed a gift of $1,000 or more to specific programs at the university.
The Erwin Tomash Library
The Erwin Tomash Library on The History of Computing is an annotated and illustrated catalog documenting a collection of books and manuscripts related to the history of computing. It was assembled, over the course of many years, by Erwin Tomash using his knowledge as a pioneer in the development of computers. The collection consists of over five thousand items from twelfth century manuscripts to modern publications, and documents the rarest items together with a series of essays that explain the uses of little known instruments and techniques that are discussed in the entries. Each entry consists of the bibliographic details, some biographical information on the author, a description of the contents, and illustrations of interesting pages and diagrams. The library catalog, almost 1600 pages long, can be found on the IEEE Computer Society website as well as the CBI website. A portion of the Erwin Tomash Library (post-1954 volumes) was donated to CBI and is publicly accessible there at the University of Minnesota.
The contents of the Erwin Tomash library were sold at auction by Sotheby's in London in September 2018. The copy of Galileo's Difesa contro alle Calunnie et Imposture di Baldessar Capra Milanese (1607), with a handwritten inscription by Galileo, sold for £466,000 (US$616,378).
The Tomash Fellowship
The Adelle and Erwin Tomash Fellowship in the History of Information Technology is awarded to a graduate student for doctoral dissertation research in the history of computing. The fellowship is to be held at the recipient's home academic institution, the Charles Babbage Institute, or any other location with appropriate research facilities. It is intended for students who have completed all requirements for the doctoral degree except the research and writing of the dissertation.
References
External links
Erwin Tomash Collection of Dataproducts Corporation Records (1962-82), Charles Babbage Institute, University of Minnesota. Original business plan, publications, reports, organizational charts and employee lists, articles, and correspondence that document the company's growth and market position in printers and core memories.
Oral history interview with Erwin Tomash. Oral history interview by Robina Mapstone, 14 March and 5 April 1973, Woodland Hills, Calif. Charles Babbage Institute, University of Minnesota. Tomash discusses his work with Engineering Research Associates (ERA), including the firm's management, the roles of William Norris, Frank Mullaney, and Arnold Cohen in ERA, Tomash's development of West Coast marketing for ERA after it became a part of Remington Rand, competition with International Business Machines, the development of Williams tube storage devices and core memory, and the ERA 1103 computer. He also recounts his move to Telemeter Magnetics, later Ampex Computer Products, the formation of Dataproducts Corporation and its subsidiary, Informatics Inc., headed by Walter F. Bauer.
Oral history interview with Erwin Tomash. Oral history interview by Arthur L. Norberg, 15 May 1983, Los Angeles, California. Charles Babbage Institute, University of Minnesota, Minneapolis. Tomash discusses his career, including employment at Engineering Research Associates (ERA) and the founding of Dataproducts. He begins with his electrical engineering education at the University of Minnesota in the early 1940s and his subsequent entry into the Army Signal Corps as a radar specialist. Tomash recalls his departure in 1956 from Remington Rand to Telemeter Magnetics, where he soon became president. This company manufactured core memory systems and one of the first successful transistor memory systems. Tomash explains how he used the organization he and others had assembled from Telemeter Magnetics to found Dataproducts Corporation in 1962.
Adelle and Erwin Tomash Fellowship in the History of Information Technology. Charles Babbage Institute, University of Minnesota. Fellowship is awarded annually to a graduate student for doctoral dissertation research in the history of computing.
CBI–Tomash Reprint Series in the History of Computing. Charles Babbage Institute, University of Minnesota. Reprints, with an expert's introduction, of difficult-to-obtain monographs, conference proceedings, manuals, and government reports.
Erwin Tomash and Michael R. Williams, The Erwin Tomash Library on the History of Computing: An Annotated and Illustrated Catalog. Charles Babbage Institute, University of Minnesota. Published in 2008, this 1600-page catalog describes in detail Erwin Tomash's extensive library documenting the origins of computing. The books date from 1180 to 1955 and include information about all forms of reckoning and other aids to calculation. Each entry consists of bibliographic details, biographical information on the author, and a description of the contents, and there are many illustrations of interesting pages and diagrams. Tomash's post-1955 books were donated to the Charles Babbage Institute and are catalogued and publicly accessible
1921 births
2012 deaths
20th-century American Jews
American people of Russian-Jewish descent
American people of Romanian-Jewish descent
University of Minnesota College of Science and Engineering alumni
People from Saint Paul, Minnesota
People from Soquel, California
American electrical engineers
21st-century American Jews
United States Army personnel of World War II |
29128888 | https://en.wikipedia.org/wiki/Cinavia | Cinavia | Cinavia, originally called Verance Copy Management System for Audiovisual Content (VCMS/AV), is an analog watermarking and steganography system developed by Verance since 1999 and released in 2010. In conjunction with the existing Advanced Access Content System (AACS) digital rights management (DRM), inclusion of Cinavia watermarking detection support became mandatory for all consumer Blu-ray Disc players from 2012.
The watermarking and steganography facility provided by Cinavia is designed to stay within the audio signal and to survive all common forms of audio transfer, including lossy data compression using discrete cosine transform, MP3, DTS, or Ogg Vorbis. It is designed to survive digital and analog sound recording and reproduction via microphones, direct audio connections and broadcasting, and does so by using audio frequencies within the hearing range. It is monaural and not a multichannel codec.
Cinavia's in-band signaling introduces intentional spread spectrum phase distortion in the frequency domain of each individual audio channel separately, giving a per-channel digital signal that can yield up to around 0.2 bits per second—depending on the quantization level available, and the desired trade-off between the required robustness and acceptable levels of psychoacoustic perceptibility. It is intended to survive analog distortions such as the wow and flutter and amplitude modulation from magnetic tape sound recording. On playback, no additional audio filters are used to cover up the distortions and discontinuities introduced.
The signal survives temporal masking and sub-band coding by operating on the fundamental frequency and its subharmonic overtones, and by dealigning the phase relationship between the strongest signal and its subharmonics. Each phase discontinuity introduced by the encoder will result in a corresponding pulse of wideband white noise, so a further range of additional distortions are introduced as a noise mitigation strategy to compensate. The desired hidden digital data signal is combined in the distortion step using a pre-determined pseudorandom binary sequence for audio frame synchronization and large amounts of forward error correction for the hidden data to be embedded. The watermark is only embedded when certain signal-to-noise ratio thresholds are met and is not available as a continuous signal—the signal must be monitored for a period of time before the embedded data can be detected and recovered. Extraction of the hidden signal is not exact but is based on recovering the convolutional codes through statistical cross-correlation.
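While Cinavia's actual phase-based algorithm is proprietary, the general principle of recovering a weak, spread hidden signal by correlating against a shared pseudorandom sequence can be illustrated with a much simpler additive spread-spectrum sketch in Python; the chip length, embedding strength, seed and payload below are arbitrary illustrative values, not parameters of Cinavia itself:

import numpy as np

# Toy additive spread-spectrum watermark (NOT Cinavia's phase-modulation
# scheme): it only illustrates how a hidden, low-level signal spread across
# many audio samples can be recovered by correlating against a shared
# pseudorandom chip sequence.
CHIPS_PER_BIT = 4096                        # samples used to carry one payload bit
rng = np.random.default_rng(seed=42)        # the seed acts as the shared secret
pn = rng.choice([-1.0, 1.0], size=CHIPS_PER_BIT)    # pseudorandom chip sequence

def embed(audio, bits, strength=0.01):
    marked = audio.copy()
    for i, bit in enumerate(bits):
        seg = slice(i * CHIPS_PER_BIT, (i + 1) * CHIPS_PER_BIT)
        marked[seg] += strength * (1.0 if bit else -1.0) * pn
    return marked

def detect(audio, n_bits):
    # The sign of the correlation against the known chip sequence recovers
    # each hidden bit even though the watermark is far below the host signal.
    return [int(np.dot(audio[i * CHIPS_PER_BIT:(i + 1) * CHIPS_PER_BIT], pn) > 0.0)
            for i in range(n_bits)]

host = rng.normal(0.0, 0.1, size=8 * CHIPS_PER_BIT)   # stand-in for real audio
payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(detect(embed(host, payload), 8))                 # recovers the payload

In this toy model the embedded signal sits well below the level of the host audio yet is still recovered reliably, because correlating over thousands of samples averages the host away while the aligned chip sequence adds up coherently.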
The Blu-ray Disc implementation of Cinavia is designed to cover two use-cases: the first is the provision of a Cinavia watermark on all movie theater soundtracks released via film distribution networks; the second use-case is for the provision of a Cinavia watermark on all Blu-ray Disc releases that points to the presence of an accompanying AACS key. If a "theatrical release" watermark is detected in a consumer Blu-ray Disc audio track, the accompanying video is deemed to have been sourced from a "cam" recording. If the "AACS watermark" is present in the audio tracks, but no accompanying and matching AACS key is found on the disc, then it is deemed to have been a "rip" made by copying to a second blank Blu-ray Disc.
Known hardware players which can detect Cinavia watermarks include the PlayStation 3 (beginning with v3.10 system software), as well as newer Blu-ray Disc players.
Overview
Cinavia works to prevent copying via the detection of a watermark recorded into the analog audio of media such as theatrical films and Blu-ray Discs. The intent is to prevent all copying, both counterfeit copies and legal copies of one's own content (for example, format shifting).
Verance claims on their website that, while the watermark is able to survive recording through microphones (such as recording a film in a movie theater with a camcorder), as well as compression and encoding, it is imperceptible to human hearing, and the presence of the watermark does not affect audio quality.
When media with the watermark is played back on a system with Cinavia detection, its firmware will detect the watermark and check that the device on which it is being played is authorized for that watermark. If the device is not authorized (such as not being an authorized movie projector in the case of a cam bootleg, or not utilizing AACS in the case of a copy of a commercial Blu-ray Disc or CSS in the case of a copy of a commercial DVD), a message is displayed (either immediately or after a set duration) stating that the media is not authorized for playback on the device and that users should visit the Cinavia web page for more information. Depending on the device and firmware, once the message is triggered, the audio may be muted, or playback may stop entirely.
Messages
Following an intervention by the Cinavia+AACS system, one of four messages is displayed to reflect the specific situation in which a watermark was detected. The messages are numbered "Cinavia message code 1–4", allowing the messages themselves to be easily translated for consumers in different languages (a simplified sketch of the selection logic follows the list):
Message Code 1: Playback stopped—Shown when theatre- or hotel-distributed audio content is being played back on a consumer playback device.
Message Code 2: Copying stopped—Shown when theatre- or hotel-distributed audio content is being recorded by a consumer recording device.
Message Code 3: Audio muted—Shown when consumer-sold audio content is being played back from an optical disc, without the matching AACS key present at the centre of the disc.
Message Code 4: Copying stopped—Shown when consumer-sold audio content is being recorded by a consumer recording device.
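The choice among these messages can be summarized as a small decision table. The following Python sketch is a simplified reconstruction of the behaviour described above, not Verance's detector code; the function name, argument names and string values are invented for illustration:

# Simplified sketch of the message selection described above; names and
# structure are illustrative, not taken from the Cinavia specification.
def cinavia_message(watermark, operation, aacs_key_matches=False):
    if watermark == "theatrical":
        return 1 if operation == "playback" else 2   # playback stopped / copying stopped
    if watermark == "consumer":
        if operation == "playback" and not aacs_key_matches:
            return 3                                  # audio muted
        if operation == "copy":
            return 4                                  # copying stopped
    return None                                       # no intervention

print(cinavia_message("consumer", "playback", aacs_key_matches=True))   # None
print(cinavia_message("consumer", "playback", aacs_key_matches=False))  # 3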
Licensing
Verance, the owner of Cinavia, makes its money through licensing agreements with several sections of the entertainment and media industry. These licence costs due to Verance were $10,000–$300,000 per manufacturer of Blu-ray Disc players—for the rights to embed the Cinavia detection system—plus additional software costs for the implementation itself. Production facilities need to pay $50 for each audio track that is watermarked with Cinavia. Distribution houses must finally pay $0.04 per disc with Cinavia-watermarked content included.
Technical aspects
Verance claims Cinavia has the following features:
Only a single channel of audio is required to detect the watermark.
The watermark is able to survive re-recording through a microphone.
The watermark can be detected through "the production, duplication, distribution, broadcast, and consumer handling of recorded content". (In the white paper for their DVD-Audio Detector Compliance Verification Suite all tests are single-channel files.)
Different copies of otherwise identical works can be distinguished.
DVD-Audio
The data throughput requirement for a watermarking system used for DVD-Audio is "Watermark Output: 3 water-mark data bits per 15 seconds (2 CCI bits and 1 SDMI Trigger Bit)". The two CCI bits in the example contain Digital Copy Control Information, while the succession of SDMI bits contains Secure Digital Music Initiative data when reconstructed. Also in the Compliance Verification Suite the lowest sample rate test is at 16,000 samples per second with 16 bits per sample. This could indicate that the bandwidth requirements top out at 8 kHz.
History
On 5 June 2009, the licensing agreements for AACS were finalized, which were updated to make Cinavia detection on commercial Blu-ray Disc players a requirement.
On 3 July 2009, Maxim Anisiutkin published an open source DVD Audio watermark detector and neutralizer computer program to the SourceForge web site. The software package contains a detailed description of the method and embedding parameters used in creating the DVD Audio or SDMI (Secure Digital Music Initiative) watermark, which was created by Verance Inc and was the earlier version of the Cinavia watermarking technology.
From January 2013 onwards, attempts were made by third-party software suppliers to make use of existing bugs and loopholes in Blu-ray Disc players to avoid Cinavia message triggering, but without any attempt being made at precisely removing the Cinavia signal from the audio. These attempts included iDeer Blu-ray Player, DVDFab and AnyDVD HD (version 7.3.1.0) which used workarounds to avoid Cinavia-enabled software Blu-ray Disc players from triggering Cinavia detection messages.
In August 2013, DVD-Ranger released a white paper detailing their methods for detecting, and subsequently removing, the present Cinavia signal from audio files. The DVD-Ranger CinEx beta software synchronises and detects the Cinavia signal in the same way as a consumer Cinavia detection routine; these identified parts of the audio stream are permanently removed, removing the Cinavia signal. Post-processing can be used to try to "fill-in" the audible gaps created.
There are claims that Cinavia can be removed using open source software such as Audacity with an audio file extracted from a video source. The audio file is processed by decreasing its pitch by 13%; the processed audio file is then merged back into the video source. This renders the Cinavia watermark unreadable; however, the reduction in pitch is easily noticed.
References
Further reading
.
Continued and republished as
Google patents search for "Rade Petrovic"
Google patents search for "Babak Tehranchi"
External links
Verance
Digital rights management systems
Digital watermarking
Compact Disc and DVD copy protection
Blu-ray Disc |
314935 | https://en.wikipedia.org/wiki/Mars%20Climate%20Orbiter | Mars Climate Orbiter | The Mars Climate Orbiter (formerly the Mars Surveyor '98 Orbiter) was a robotic space probe launched by NASA on December 11, 1998 to study the Martian climate, Martian atmosphere, and surface changes and to act as the communications relay in the Mars Surveyor '98 program for Mars Polar Lander. However, on September 23, 1999, communication with the spacecraft was permanently lost as it went into orbital insertion. The spacecraft encountered Mars on a trajectory that brought it too close to the planet, and it was either destroyed in the atmosphere or escaped the planet's vicinity and entered an orbit around the Sun. An investigation attributed the failure to a measurement mismatch between two software systems: metric units by NASA and US Customary (imperial or "English") units by spacecraft builder Lockheed Martin.
Mission background
History
After the loss of Mars Observer and the onset of the rising costs associated with the future International Space Station, NASA began seeking less expensive, smaller probes for scientific interplanetary missions. In 1994, the Panel on Small Spacecraft Technology was established to set guidelines for future miniature spacecraft. The panel determined that the new line of miniature spacecraft should be under with highly focused instrumentation. In 1995, a new Mars Surveyor program began as a set of missions designed with limited objectives, low costs, and frequent launches. The first mission in the new program was Mars Global Surveyor, launched in 1996 to map Mars and provide geologic data using instruments intended for Mars Observer. Following Mars Global Surveyor, Mars Climate Orbiter carried two instruments, one originally intended for Mars Observer, to study the climate and weather of Mars.
The primary science objectives of the mission included:
determine the distribution of water on Mars
monitor the daily weather and atmospheric conditions
record changes on the Martian surface due to wind and other atmospheric effects
determine temperature profiles of the atmosphere
monitor the water vapor and dust content of the atmosphere
look for evidence of past climate change.
Spacecraft design
The Mars Climate Orbiter bus measured tall, wide and deep. The internal structure was largely constructed with graphite composite/aluminum honeycomb supports, a design found in many commercial airplanes. With the exception of the scientific instruments, battery and main engine, the spacecraft included dual redundancy on the most important systems.
The spacecraft was 3-axis stabilized and included eight hydrazine monopropellant thrusters: four thrusters to perform trajectory corrections and four thrusters to control attitude. Orientation of the spacecraft was determined by a star tracker, two sun sensors and two inertial measurement units. Orientation was controlled by firing the thrusters or using three reaction wheels. To perform the Mars orbital insertion maneuver, the spacecraft also included a LEROS 1B main engine rocket, providing of thrust by burning hydrazine fuel with nitrogen tetroxide (NTO) oxidizer.
The spacecraft included a high-gain antenna to transceive data with the Deep Space Network over the x band. The radio transponder designed for the Cassini–Huygens mission was used as a cost-saving measure. It also included a two-way UHF radio frequency system to relay communications with Mars Polar Lander upon an expected landing on December 3, 1999.
The space probe was powered with a 3-panel solar array, providing an average of at Mars. Deployed, the solar array measured in length. Power was stored in 12-cell, 16-amp-hour Nickel hydrogen batteries. The batteries were intended to be recharged when the solar array received sunlight and power the spacecraft as it passed into the shadow of Mars. When entering into orbit around Mars, the solar array was to be utilized in the aerobraking maneuver, to slow the spacecraft until a circular orbit was achieved. The design was largely adapted from guidelines from the Small Spacecraft Technology Initiative outlined in the book, Technology for Small Spacecraft.
In an effort to simplify previous implementations of computers on spacecraft, Mars Climate Orbiter featured a single computer using an IBM RAD6000 processor utilizing a POWER1 ISA capable of 5, 10 or 20 MHz operation. Data storage was to be maintained on 128 MB of random-access memory (RAM) and 18 MB of flash memory. The flash memory was intended to be used for highly important data, including triplicate copies of the flight system software.
Scientific instruments
The Pressure Modulated Infrared Radiometer (PMIRR) uses narrow-band radiometric channels and two pressure modulation cells to measure atmospheric and surface emissions in the thermal infrared and a visible channel to measure dust particles and condensates in the atmosphere and on the surface at varying longitudes and seasons. Its principal investigator was Daniel McCleese at JPL/CALTECH. Similar objectives were later achieved with Mars Climate Sounder on board Mars Reconnaissance Orbiter. Its objectives:
Map the three-dimensional and time-varying thermal structure of the atmosphere from the surface to 80 km altitude.
Map the atmospheric dust loading and its global, vertical and temporal variation.
Map the seasonal and spatial variation of the vertical distribution of atmospheric water vapor to an altitude of at least 35 km.
Distinguish between atmospheric condensates and map their spatial and temporal variation.
Map the seasonal and spatial variability of atmospheric pressure.
Monitor the polar radiation balance.
The Mars Color Imager (MARCI) is a two-camera (medium-angle/wide-angle) imaging system designed to obtain pictures of the Martian surface and atmosphere. Under proper conditions, resolutions up to are possible. The principal investigator on this project was Michael Malin at Malin Space Science Systems and the project was reincorporated on Mars Reconnaissance Orbiter. Its objectives:
Observe Martian atmospheric processes at global scale and synoptically.
Study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time.
Examine surface features characteristic of the evolution of the Martian climate over time.
Mission profile
Launch and trajectory
The Mars Climate Orbiter probe was launched on December 11, 1998 at 18:45:51 UTC by the National Aeronautics and Space Administration from Space Launch Complex 17A at the Cape Canaveral Space Force Station in Florida, aboard a Delta II 7425 launch vehicle. The complete burn sequence lasted 42 minutes, bringing the spacecraft into a Hohmann transfer orbit and sending the probe on a 9.5-month trajectory to Mars. At launch, Mars Climate Orbiter weighed including propellant.
Encounter with Mars
Mars Climate Orbiter began the planned orbital insertion maneuver on September 23, 1999 at 09:00:46 UTC. Mars Climate Orbiter went out of radio contact when the spacecraft passed behind Mars at 09:04:52 UTC, 49 seconds earlier than expected, and communication was never reestablished. Due to complications arising from human error, the spacecraft encountered Mars at a lower than anticipated altitude and it was either destroyed in the atmosphere or re-entered heliocentric space after leaving Mars' atmosphere. Mars Reconnaissance Orbiter has since completed most of the intended objectives for this mission.
Cause of failure
On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft.
Previously, on September 8, 1999, Trajectory Correction Maneuver-4 (TCM-4) was computed, and was then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of on September 23, 1999.
However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team reported that it appeared the insertion altitude could be much lower than planned, at about . Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of . was the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver.
During insertion, the orbiter was intended to skim through Mars' upper atmosphere, gradually aerobraking for weeks, but post-failure calculations showed that the spacecraft's trajectory would have taken it within of the surface. At this altitude, the spacecraft would likely have skipped violently off the denser-than-expected atmosphere, and it was either destroyed in the atmosphere, or re-entered heliocentric space.
The primary cause of this discrepancy was that one piece of ground software supplied by Lockheed Martin produced results in a United States customary unit, contrary to its Software Interface Specification (SIS), while a second system, supplied by NASA, expected those results to be in SI units, in accordance with the SIS. Specifically, software that calculated the total impulse produced by thruster firings produced results in pound-force seconds. The trajectory calculation software then used these results – expected to be in newton-seconds (incorrect by a factor of 4.45) – to update the predicted position of the spacecraft.
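The effect of such a mismatch is easy to demonstrate numerically. The following Python sketch uses arbitrary illustrative figures (not the orbiter's actual thrust, burn time or mass) and hypothetical function names:

LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds

def impulse_from_ground_software(thrust_lbf, burn_time_s):
    # Stand-in for the ground software: reports total impulse in
    # pound-force seconds instead of the newton-seconds required by the SIS.
    return thrust_lbf * burn_time_s

def delta_v(impulse_newton_seconds, mass_kg):
    # The trajectory software assumes SI units: delta-v = impulse / mass.
    return impulse_newton_seconds / mass_kg

raw = impulse_from_ground_software(thrust_lbf=5.0, burn_time_s=60.0)
assumed = delta_v(raw, mass_kg=340.0)                  # value used, unconverted
actual = delta_v(raw * LBF_S_TO_N_S, mass_kg=340.0)    # effect really imparted
print(assumed, actual, actual / assumed)               # ratio is about 4.45

Because each modelled thruster firing was understated by this constant factor, the accumulated error steadily pulled the predicted trajectory away from the real one.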
Still, NASA does not place the responsibility on Lockheed for the mission loss; instead, various officials at NASA have stated that NASA itself was at fault for failing to make the appropriate checks and tests that would have caught the discrepancy.
The discrepancy between calculated and measured position, resulting in the discrepancy between desired and actual orbit insertion altitude, had been noticed earlier by at least two navigators, whose concerns were dismissed because they "did not follow the rules about filling out [the] form to document their concerns". A meeting of trajectory software engineers, trajectory software operators (navigators), propulsion engineers, and managers was convened to consider the possibility of executing Trajectory Correction Maneuver-5, which was in the schedule. Attendees of the meeting recall an agreement to conduct TCM-5, but it was ultimately not done.
Project costs
According to NASA, the cost of the mission was $327.6 million total for the orbiter and lander, comprising $193.1 million for spacecraft development, $91.7 million for launching it, and $42.8 million for mission operations.
See also
List of missions to Mars
List of artificial objects on Mars
List of software bugs
Metrication
Notes
References
External links
Mars Surveyor '98 launch press kit
Mars Climate Orbiter arrival at Mars press kit
Mars Climate Orbiter Mission Profile by NASA's Solar System Exploration
NASA Space Science Data Coordinated Archive
Mars Climate Orbiter Mishap Investigation Board Phase I Report - November 10, 1999
Climate of Mars
Missions to Mars
Space accidents and incidents in the United States
NASA space probes
Lockheed Martin satellites and probes
Destroyed space probes
Metrication in the United States
Spacecraft launched in 1998
Spacecraft launched by Delta II rockets |
981857 | https://en.wikipedia.org/wiki/Ispell | Ispell | Ispell is a spelling checker for Unix that supports most Western languages. It offers several interfaces, including a programmatic interface for use by editors such as Emacs. Unlike GNU Aspell, ispell will only suggest corrections that are based on a Damerau–Levenshtein distance of 1; it will not attempt to guess more distant corrections based on English pronunciation rules.
Ispell has a very long history that can be traced back to a program that was originally written in 1971 in PDP-10 Assembly language by R. E. Gorin, and later ported to the C programming language and expanded by many others. It is currently maintained by Geoff Kuenning. The generalized affix description system introduced by ispell has since been imitated by other spelling checkers such as MySpell.
Like most computerized spelling checkers, ispell works by reading an input file word by word, stopping when a word is not found in its dictionary. Ispell then attempts to generate a list of possible corrections and presents the incorrect word and any suggestions to the user, who can then choose a correction, replace the word with a new one, leave it unchanged, or add it to the dictionary.
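The candidate-generation step can be sketched in a few lines of Python: produce every string within Damerau–Levenshtein distance 1 of the misspelled word and keep those that appear in the dictionary. This is only an illustration of the idea; ispell's real implementation works against its compressed, affix-aware hash file rather than a plain word set:

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits_at_distance_one(word):
    # All strings within Damerau-Levenshtein distance 1: deletions,
    # adjacent transpositions, substitutions and insertions.
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def suggest(word, dictionary):
    # Keep only candidates present in the dictionary, as ispell does.
    return sorted(edits_at_distance_one(word) & dictionary)

print(suggest("teh", {"the", "tea", "ten", "tech", "toe"}))
# -> ['tea', 'tech', 'ten', 'the']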
Ispell pioneered the idea of a programming interface, which was originally intended for use by Emacs. Other applications have since used the feature to add spell-checking to their own interface, and GNU Aspell has adopted the same interface so that it can be used with the same set of applications.
There are ispell dictionaries for most widely spoken Western languages.
Ispell is available under a specific open-source license.
See also
Hunspell
MySpell
Pspell
External links
References
Original unix spell, on which Ispell is based
Spell checkers
Free spelling checking programs
Language software for Linux
Unix software |
5510258 | https://en.wikipedia.org/wiki/Jesse%20Shera | Jesse Shera | Jesse Hauk Shera (December 8, 1903 – March 8, 1982) was an American librarian and information scientist who pioneered the use of information technology in libraries and played a role in the expansion of its use in other areas throughout the 1950s, 60s, and 70s.
He was born in Oxford, Ohio, on December 8, 1903, the only child of Charles and Jessie Shera. His hometown of Oxford was a farming community and the home of Miami University. Shera went to William McGuffey High School and graduated in 1921. While attending high school he played the drums in the school band, was a member of the debate team and a cheerleader, and was the senior class president. He lived in Oxford until after he obtained his undergraduate degree from Miami University. In 1925 Miami University awarded Shera a B.A. in English with honors. Shera later went on to earn a master's degree in English literature from Yale University in 1927 and a doctorate in library science from the University of Chicago in 1944, advised by Louis Round Wilson with Pierce Butler on his committee.
Shera suffered from strabismus throughout his life.
Career
In 1928, Shera returned to Miami University and took a temporary job in the library as an assistant cataloguer and later in the year took a job as a research associate and bibliographer with the Scripps Foundation for Research in Population Problems. He remained a part of this project through 1938. Shera hoped to become a college English teacher but never succeeded due to the depression and a lack of available teaching positions in colleges and universities. “In what he has called ‘an act of desperation on my part which the library profession has lived to regret,’ he decided to make librarianship his career.”
In the thirties, Shera was trying to convince the ALA Bulletin to be a more serious journal, and for librarians to be more careful and precise in how they answered patron questions. In short, he was concerned with their level of professionalism. At that particular time, there was no “professional creed”, and this upset him, also.
He studied and wrote on the history and philosophy of libraries often, and considered the work of libraries to be one of humanistic endeavor.
As early as 1935, he was suggesting that college libraries should develop collective purchasing and interlibrary loan systems. In addition, he suggested using microforms for the same purposes that services like LexisNexis would eventually be created to perform, as well as for cooperative cataloging and reference.
From the very beginning of his career, Shera seemed to be entirely comfortable with whatever type of controversy came to hand. On librarian "neutrality", Shera warned in a 1935 address to the College and University Section of the American Library Association:
“… Today we can ill afford to stand mutely behind our circulation desks, calmly handing out reserved books at the beck and call of an endless stream of students, blandly reaffirming our convictions of our own “academic detachment.” We may be rudely awakened some morning with the realization that we are the hapless and unwilling guardians of the propaganda of a fascist regime.”
In 1940, Shera accepted an appointment with the Library of Congress as chief of the census library project. The next year he transferred to the Office of Strategic Services, where he was deputy chief of the central information division of the research and analysis branch. In 1944, the same year Shera obtained his doctorate in library science, he was named the associate director of libraries for the University of Chicago. Throughout his time in this position Shera was the head of the preparations division, and then of readers’ services. He became a member of the University of Chicago Graduate Library School (GLS) faculty as an assistant professor in 1947. Four years later he was promoted to associate professor. In 1949 Shera’s first book, Foundations for the Public Library; The Origins of the Public Library Movement in New England, 1629–1855, was published by the University of Chicago Press. This book is generally accepted as a classic discussion of the social factors contributing to the emergence of tax-supported public libraries.
From 1950–1952, Shera was the chairman of the American Library Association’s committee on bibliography. In 1952 Shera became dean of the library school of Western Reserve University, expanding its faculty and adding a doctoral program within a few years. Under his leadership, the library school at Western Reserve became a leading contributor to the automation of libraries over the next three decades. According to an excerpt from the Saturday Review (December 1, 1956) found in the Current Biography, Shera suggests that “through the use of many machines we are at the beginning of a new era: an age which may bring quite unheard of ways for the more effective communication of knowledge”.
Also in 1952, Shera took over as head of the American Documentation Institute (ADI) (which continues as the Association for Information Science & Technology). Prior to 1952, the ADI had been focused on refining the use of microfilm for the preservation and organization of documents; Shera turned its attention to applications of information technology. In 1955 Shera teamed with James W. Perry and Alan Kent to found the Center for Documentation and Communication Research (CDCR), which advised industry, government and higher education on information systems. This center was the first of its kind to be associated with any library school, and became a resource for the research into new areas of education for library schools.
In the 1960s, Shera designed a proposal for his project of "Social Epistemology", building on the work of Douglas Waples of the Graduate Library School at Chicago. Waples dealt with social effects of reading, and asked the basic questions of the new discipline that Shera named social epistemology. This new discipline is a study of the ways in which society can access and perceive its environment or information. It can also provide a framework for the production, flow, integration, and consumption of information throughout society. One of the most practical applications of social epistemology is in the library profession. A librarian aims to be an efficient mediator between man and his access to recorded knowledge. Tools to achieve this goal are classification schemes, subject headings, indexes, and other devices for the subject analysis of bibliographic units.
In 1963–1964 Shera was the president of the Ohio Library Association. From 1964–1965 Shera served as president of the Association of American Library Schools (currently the Association for Library and Information Science Education). He was a member of the Information Science and Automation Division of ALA (currently the Library and Information Technology Association), where he served as president from 1971-1972.
He wrote and spoke about every type of librarianship from public to special and the history thereof. Of special interest to him was the effect that modern culture has had in the shaping of the modern library and the effect that libraries have had on their host societies in turn.
Shera wrote numerous books and articles and served as the editor of a number of library and information science related journals. Between 1947 and 1952 Shera was an associate editor for Library Quarterly, and from 1952 to 1955 he served as an advisory editor. Shera was also an editor for American Documentation from 1953 to 1959. He was an advisory editor of the Journal of Cataloging and Classification from 1947 to 1957. He also served as editor of the Western Reserve University Press from 1954 to 1957.
Despite his work in advancing information science and the use of information technology in library contexts, throughout his career he was a consistent believer in the importance of sociological and humanistic aspects to librarianship and information organization. Late in his career he came to believe that the "human side" of librarianship and information work in general faced a danger of being overshadowed by attention to technical matters as the information explosion of the 1980s began to take shape.
Over the course of his life, Shera touched every aspect of library science. He championed technology and stated “that the computer would revolutionize libraries” but urged careful use of it, rather than subservience to it. Shera saw the potential for technology in library science. “He tried to build information retrieval systems yet at the same time was a sober and sharp critic of the faddists, commercial hucksters, and techie boosters who would and often did take us down expensive and obscure roads on our way to the future.” At times, his articles almost seem to push entirely in one direction or the other, but taken as a whole he was fairly evenhanded. Proponents on both sides of the technology debate claimed him as their own, but he didn't seem to have any affinity for either extreme camp until at least the mid-seventies. In 1976 Shera wrote of the progress made over the last century in an article for the Library Journal entitled “Failure and Success: Assessing a Century”. This article can be summarized in saying that the new technology is leading librarians to analyze more thoroughly, and makes them ask if that is something that should be done. Shera states that it can be beneficial to librarianship so long as machines and the demands of machines are not allowed to determine the character of and the limitations upon our professional services. This technology is a great opportunity, but it is important to keep Shera’s advice in mind to not allow it to define the profession. This theme is repeated over and over across several years: “Embrace the technology but do not become its servant”.
He was elected as a Fellow of the American Association for the Advancement of Science shortly before his death on March 8, 1982, aged 78.
Many of his books are actually compilations of essays or presentations, but there are a fair number of text books scattered through his life’s work.
JESSE, the primary email discussion list used by library and information science educators, is named in honor of Jesse Shera.
The American Library Association offers two awards in Shera’s name: the Jesse H. Shera Award for Distinguished Published Research, and the Jesse H. Shera Award for the Support of Dissertation Research. The first of these awards is given for research articles published in English during the calendar year, nominated by any member of Library Research Round Table (LRRT) or by the editors of research journals in the field of library and information studies. The second award is given to provide recognition and monetary support for dissertation research employing exemplary research design and methods.
Books by Jesse Shera
Introduction to library science: basic elements of library service. Littleton, Colo.: Libraries Unlimited, 1976
Knowing books and men; knowing computers too. Littleton, Colo., Libraries Unlimited, 1973
The foundations of education for librarianship. New York, Becker and Hayes 1972
"The complete librarian"; and other essays. Cleveland, Press of Western Reserve University, 1971, 1979
Sociological foundations of librarianship. New York, Asia Pub. House 1970
Documentation and the organization of knowledge. Hamden, Conn., Archon Books, 1966
Libraries and the organization of knowledge. London, C. Lockwood 1965
An epistemological foundation for library science. Cleveland, Press of Western Reserve University, 1965
Information resources: a challenge to American science and industry. Cleveland, Press of Western Reserve Univ. 1958
The classified catalog: basic principles and practices. Chicago, American Library Association, 1956
Documentation in action / Jesse H. Shera, Allen Kent, James W. Perry [editors]. New York : Reinhold Publishing Corp., 1956.
Historians, books and libraries: a survey of historical scholarship in relation to library resources, organization and services. Cleveland, Press of Western Reserve University, 1953
Bibliographic organization. Chicago, University of Chicago Press, 1951
Foundations of the public library: the origins of the public library movement in New England, 1629–1855. Chicago : University of Chicago Press, 1952, 1949
An eddy in the western flow of America culture. Ohio state archæological and historical quarterly. --Columbus, O., 1935.
The age factor in employment, a classified bibliography, by J.H. Shera ... Bulletin of bibliography and dramatic index. --Boston : Boston Book Co., 1931-32.
References
Further reading
H. Curtis Wright. Jesse Shera, librarianship and information science. Provo, Utah : School of Library and Information Sciences, Brigham Young University (1988)
John V. Richardson Jr., The Spirit of Inquiry: The Graduate Library School at Chicago, 1921-1951. Foreword by Jesse Shera. Chicago: American Library Association, 1982.
John V. Richardson Jr., The Gospel of Scholarship: Pierce Butler and A Critique of American Librarianship. Metuchen, NJ: Scarecrow Press, 1992. xv, 350 pp.
Shera, J. H., & Rawski, C. H. (1973). Toward a theory of librarianship: Papers in honor of Jesse Hauk Shera. Metuchen, N.J.: Scarecrow Press.
1903 births
1982 deaths
American librarians
Miami University alumni
People from Oxford, Ohio
University of Chicago Graduate Library School alumni
University of Chicago faculty
Yale University alumni
Library science scholars |
15136570 | https://en.wikipedia.org/wiki/Software%20testing%20controversies | Software testing controversies | There is considerable variety among software testing writers and consultants about what constitutes responsible software testing. Prominent members of the Context-Driven School of Testing consider much of the writing about software testing to be doctrine, mythology, and folklore. Some contend that this belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the Food and Drug Administration who promote them. The Context-Driven School's retort is that Lessons Learned in Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that not all software testing occurs in a regulated environment and that practices appropriate for such environments would be ruinously expensive, unnecessary, and inappropriate for other contexts; and that in any case the FDA generally promotes the principle of the least burdensome approach.
Some of the major controversies include:
Best practices
Many members of the Context-Driven School of Testing believe that there are no best practices of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. James Bach wrote "...there is no practice that is better than all other possible practices, regardless of the context." However, some testing practitioners do not see an issue with the concept of "best practices" and do not believe that term implies that a practice is universally applicable.
Agile vs. traditional
Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner. Instead of assuming that testers have full access to source code and complete specifications, these writers, including Kaner and James Bach, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) has popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.
However, saying that "maturity models" like CMM gained ground in opposition to agile testing may not be accurate: the agile movement is a way of working, while CMM is a process-improvement framework.
Another point of view must also be considered: the operational culture of an organization. While it may be true that testers must be able to work in a world of uncertainty, it is also true that their flexibility must have direction. In many cases test cultures are self-directed, and as a result fruitless, unproductive work can ensue. Furthermore, positive evidence of defects may indicate either that you have found the tip of a much larger problem or that you have exhausted all possibilities. A framework is a test of testing: it provides a boundary that can measure (validate) the capacity of the work. Both sides have argued, and will continue to argue, the virtues of their approach. The proof, however, is in each assessment of delivery quality. It does little good to test systematically if you are too narrowly focused. On the other hand, finding many errors is not an indicator that agile methods were the driving force; you may simply have stumbled upon an obviously poor piece of work.
Exploratory vs. scripted
Exploratory testing means simultaneous test design and test execution with an emphasis on learning. Scripted testing means that learning and test design happen prior to test execution, and quite often the learning has to be done again during test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Some writers consider it a primary and essential practice. Structured exploratory testing is a compromise when the testers are familiar with the software. A vague test plan, known as a test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual testers to choose the method and steps of testing.
There are two main disadvantages associated with a primarily exploratory testing approach. The first is that there is no opportunity to prevent defects, which can happen when the designing of tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design. The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing approach is difficult. For this reason, a blended approach of scripted and exploratory testing is often used to reap the benefits while mitigating each approach's disadvantages.
Manual vs. automated
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software. The success of automated software testing depends on complete and comprehensive test planning. Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing. Many large software organizations perform automated testing. Some have developed their own automated testing environments specifically for internal development, and not for resale.
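The idea of an automated test oracle can be sketched briefly. In the following Python illustration the oracle is a cheap property check (ordering plus element preservation) rather than a human judgement or a full reference implementation; the sorting routine merely stands in for a real system under test:

from collections import Counter

def system_under_test(values):
    return sorted(values)                    # stand-in for the real component

def oracle(inputs, output):
    # Property-based oracle: the result must be ordered and contain exactly
    # the same elements as the input. No human inspects individual results.
    ordered = all(a <= b for a, b in zip(output, output[1:]))
    same_elements = Counter(output) == Counter(inputs)
    return ordered and same_elements

def run_automated_checks(test_inputs):
    return [xs for xs in test_inputs if not oracle(xs, system_under_test(xs))]

print(run_automated_checks([[3, 1, 2], [5, 5], [], [2, -1]]))   # [] means all passed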
Software design vs. software implementation
Ideally, software testers should not be limited only to testing software implementation, but also to testing software design. With this assumption, the role and involvement of testers will change dramatically. In such an environment, the test cycle will change too. To test software design, testers would review requirement and design specifications together with designer and programmer, potentially helping to identify bugs earlier in software development.
Who watches the watchmen?
One principle in software testing is summed up by the classical Latin question posed by Juvenal:
Quis custodiet ipsos custodes? ("Who watches the watchmen?"), alternatively referred to informally as the "Heisenbug" concept (a common misconception that confuses Heisenberg's uncertainty principle with the observer effect). The idea is that any form of observation is also an interaction, and that the act of testing can itself affect that which is being tested.
In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The process can fail in ways that are not the result of defects in the target but rather result from defects in (or indeed intended features of) the testing tool.
There are metrics being developed to measure the effectiveness of testing. One method is analyzing code coverage (itself highly controversial): everyone can agree which areas are not being covered at all and try to improve coverage in those areas.
Bugs can also be placed into code on purpose, and the number of bugs that have not been found can be predicted based on the percentage of intentionally placed bugs that were found. The problem is that it assumes that the intentional bugs are the same type of bug as the unintentional ones.
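The underlying arithmetic is a capture-recapture style estimate. The Python sketch below makes the criticized assumption explicit (seeded and genuine defects are assumed to be equally easy to find), and the numbers are purely illustrative:

def estimate_remaining_defects(seeded_total, seeded_found, genuine_found):
    # Assumes genuine defects are detected at the same rate as seeded ones,
    # which is exactly the assumption questioned above.
    if seeded_found == 0:
        raise ValueError("no seeded defects found; the estimate is undefined")
    detection_rate = seeded_found / seeded_total
    estimated_genuine_total = genuine_found / detection_rate
    return estimated_genuine_total - genuine_found

# 50 defects seeded, 40 recovered (80% detection), 120 genuine defects found:
print(estimate_remaining_defects(50, 40, 120))   # -> 30.0 estimated still latent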
Finally, there is the analysis of historical find-rates. By measuring how many bugs are found and comparing them to predicted numbers (based on past experience with similar projects), certain assumptions regarding the effectiveness of testing can be made. While not an absolute measurement of quality, if a project is halfway complete and there have been no defects found, then changes may be needed to the procedures being employed by QA.
References
Software testing |
31801010 | https://en.wikipedia.org/wiki/Ksar%20%28Unix%20sar%20grapher%29 | Ksar (Unix sar grapher) | Ksar is a BSD-licensed Java-based application that creates graphs of all parameters from data collected by Unix sar utilities. Usually, Unix sar is part of the sysstat package, and runs sa1, sa2, and sadc through cron to create data files in /var/log/sa/saNN. Characteristics include:
Images can be zoomed by dragging the mouse on an image to pinpoint problems
Results can be exported to PDF or JPEG format
Syntax and options
Below is the list of CLI options supported by Ksar. Ksar's -help option will list all supported options of the applicable Ksar version.
$ java -jar kSar.jar -help
kSar version: 5.0.6
Usage:
-version: show kSar version number
-help: show this help
-input <arg>: argument must be either ssh://user@host/command or cmd://command or file://path/to/file or just /path/to/file
-graph <graph list>: space separated list of graph to be output
-showCPUstacked: will make the CPU used graph as stacked
-showMEMstacked: will make the memory graph as stacked (Linux only)
-cpuFixedAxis: will graph CPU used with fixed axis from 0% to 100%
-showIntrListstacked : will make the Interrupt List graph as stacked
-showTrigger: will show trigger on graph (disabled by default)
-noEmptyDisk: will not export disk with no data
-tile: will tile window
-userPrefs: will use the userPrefs for outputting graphs (last export of this host)
-showOnlygraphName: will only print graph name available for that data (to be use for -graph)
-addHTML: will create an HTML page with PNG/JPG image
-outputPDF <pdf file> : output the pdf report to the pdf file
-outputPNG <base filename> : output the graphs to PNG file using argument as base filename
-outputJPG <base filename> : output the graphs to JPG file using argument as base filename
-outputCSV <CSV file> : output the CSV file
-width <size> : make JPG/PNG with specified width size (default: 800)
-height <size> : make JPG/PNG with specified height size (default: 600)
-startdate <date> : will graph the range beginning at that time
-enddate <date> : will graph the range until that date
-solarisPagesize <pagesize in B>: will set solaris pagesize
-wizard: open with unified login popup
-replaceShortcut <xml file>: replace all shortcuts with those in the .xml file
-addShortcut <xml file>: add shortcut from the xml file
-startup: open window marked for opening at startup
Generating SAR Text Files for Ksar Use
To begin gathering sysstat history information for use by the sar command, sysstat should be configured to run through cron (preferably every minute). More instructions are available on the sysstat web site.
Generating sar text file with all system resources information
DT="10"
LC_ALL=C sar -A -f /var/log/sa/sa$DT > /tmp/sar-$(hostname)-$DT.txt
ls -l /tmp/sar-$(hostname)-$DT.txt
Generating only disk information from a sar data file
(Note that sar will collect disk information only if sadc is running with the -d option through cron)
DT="10"
LC_ALL=C sar -d -p -f /var/log/sa/sa$DT > /tmp/sar-$(hostname)-$DT.txt
ls -l /tmp/sar-$(hostname)-$DT.txt
Generating a text file for multiple days
DT="12 13 14"
>/tmp/sar-$(hostname)-multiple.txt
for i in $DT; do
LC_ALL=C sar -A -f /var/log/sa/sa$i >> /tmp/sar-$(hostname)-multiple.txt
done
ls -l /tmp/sar-$(hostname)-multiple.txt
To include all the days present in the default folder, you can replace the hardcoded DT variable with:
DT=$(ls /var/log/sa/sa[0-9][0-9] | sed 's_/var/log/sa/sa_ _g' | xargs)
See also
Sar (Unix)
External links
How to use ksar - cyberciti article
Sourceforge Ksar Project Page
Job scheduling
Computer performance
System administration
BSD software |
3825820 | https://en.wikipedia.org/wiki/Distributed%20Replicated%20Block%20Device | Distributed Replicated Block Device | DRBD is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications, and some shell scripts. DRBD is traditionally used in high availability (HA) computer clusters, but beginning with DRBD version 9, it can also be used to create larger software defined storage pools with a focus on cloud integration.
A DRBD device is a virtual block device layered on top of an existing logical block device, such as a logical volume, on each participating cluster node.
The DRBD software is free software released under the terms of the GNU General Public License version 2.
DRBD is part of the Lisog open source stack initiative.
Mode of operation
DRBD layers logical block devices (conventionally named /dev/drbdX, where X is the device minor number) over existing local block devices on participating cluster nodes. Writes to the primary node are transferred to the lower-level block device and simultaneously propagated to the secondary node(s). The secondary node(s) then transfers data to its corresponding lower-level block device. All read I/O is performed locally unless read-balancing is configured.
Should the primary node fail, a cluster management process promotes the secondary node to a primary state. This transition may require a subsequent verification of the integrity of the file system stacked on top of DRBD, by way of a filesystem check or a journal replay. When the failed ex-primary node returns, the system may (or may not) raise it to primary level again, after device data resynchronization. DRBD's synchronization algorithm is efficient in the sense that only those blocks that were changed during the outage must be resynchronized, rather than the device in its entirety.
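The resynchronization idea can be pictured with a short sketch (illustrative Python only, not DRBD's actual code or data structures): the primary writes locally and to the peer while the peer is reachable, records which blocks changed while it is not, and ships only those blocks when the peer returns.

# Conceptual sketch of change tracking during a peer outage (not real DRBD code).
class MirroredDevice:
    def __init__(self, num_blocks):
        self.local = [b""] * num_blocks       # lower-level local block device
        self.peer = [b""] * num_blocks        # remote copy on the secondary
        self.peer_online = True
        self.dirty = set()                    # blocks changed while the peer was down

    def write(self, block_no, data):
        self.local[block_no] = data           # always write locally
        if self.peer_online:
            self.peer[block_no] = data        # propagate the write to the peer
        else:
            self.dirty.add(block_no)          # remember the block for later resync

    def resync(self):
        # Only blocks touched during the outage are copied, not the whole device.
        for block_no in sorted(self.dirty):
            self.peer[block_no] = self.local[block_no]
        self.dirty.clear()

dev = MirroredDevice(num_blocks=8)
dev.write(3, b"alpha")
dev.peer_online = False                       # secondary node fails or disconnects
dev.write(5, b"beta")
dev.peer_online = True                        # peer returns
dev.resync()                                  # ships only block 5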
DRBD is often deployed together with the Pacemaker or Heartbeat cluster resource managers, although it does integrate with other cluster management frameworks. It integrates with virtualization solutions such as Xen, and may be used both below and on top of the Linux LVM stack.
DRBD allows for load-balancing configurations, allowing both nodes to access a particular DRBD device in read/write mode with shared storage semantics. A multiple primary (multiple read/write nodes) configuration requires the use of a distributed lock manager.
Shared cluster storage comparison
Conventional computer cluster systems typically use some sort of shared storage for data being used by cluster resources. This approach has a number of disadvantages, which DRBD may help offset:
Shared storage resources must typically be accessed over a storage area network or on a network attached storage server, which creates some overhead in read I/O. In DRBD that overhead is reduced as all read operations are carried out locally.
Shared storage is usually expensive and consumes more space (2U and more) and power. DRBD allows for an HA setup with only 2 machines.
Shared storage is not necessarily highly available. For example, a single storage area network accessed by multiple virtualization hosts is considered shared storage, but is not considered highly available at the storage level - if that single storage area network fails, neither host within the cluster can access the shared storage. DRBD allows for a storage target that is both shared and highly available.
A disadvantage is that writes take longer: routing each write through the other node is slower than writing directly to a shared storage device.
Comparison to RAID-1
DRBD bears a superficial similarity to RAID-1 in that it involves a copy of data on two storage devices, such that if one fails, the data on the other can be used. However, it operates in a very different way from RAID and even network RAID.
In RAID, the redundancy exists in a layer transparent to the storage-using application. While there are two storage devices, there is only one instance of the application and the application is not aware of multiple copies. When the application reads, the RAID layer chooses the storage device to read. When a storage device fails, the RAID layer chooses to read the other, without the application instance knowing of the failure.
In contrast, with DRBD there are two instances of the application, and each can read only from one of the two storage devices. Should one storage device fail, the application instance tied to that device can no longer read the data. Consequently, in that case, that application instance shuts down and the other application instance, tied to the surviving copy of the data, takes over.
Conversely, in RAID, if the single application instance fails, the information on the two storage devices is effectively unusable, but in DRBD, the other application instance can take over.
Applications
Operating within the Linux kernel's block layer, DRBD is essentially workload agnostic. A DRBD device can be used as the basis of:
A conventional file system (this is the canonical example),
a shared disk file system such as GFS2 or OCFS2,
another logical block device (as used in LVM, for example),
any application requiring direct access to a block device.
DRBD-based clusters are often employed for adding synchronous replication and high availability to file servers, relational databases (such as MySQL), and many other workloads.
Inclusion in Linux kernel
DRBD's authors originally submitted the software to the Linux kernel community in July 2007, for possible inclusion in the canonical kernel.org version of the Linux kernel. After a lengthy review and several discussions, Linus Torvalds agreed to have DRBD as part of the official Linux kernel. DRBD was merged on 8 December 2009 during the "merge window" for Linux kernel version 2.6.33.
See also
Highly Available Storage
High-availability cluster
Disk mirroring
References
External links
LINBIT
High-Availability Linux project web site
Storage software
Virtualization-related software for Linux |
12881004 | https://en.wikipedia.org/wiki/Earl%20A.%20Pace%20Jr. | Earl A. Pace Jr. | Earl A. Pace Jr. is an American businessman, computer scientist, and activist. He was the co-founder of Black Data Processing Associates (BDPA) in 1975.
Career
Earl A. Pace Jr. is a graduate of Pennsylvania State University, and pursued graduate studies at Temple University in Philadelphia. Pace began his career in information technology as a computer programmer trainee at the Pennsylvania Railroad (PRR) in 1965, where he remained until 1967.
Over the next ten years, he worked as a programmer, programmer analyst, programming manager and as vice president of a financial telecommunications company in Philadelphia, Pennsylvania. In 1976, he incorporated Pace Data Systems, of which he is still president. Pace Data Systems, Inc. is a full-service information technology firm providing services through its Philadelphia, Pennsylvania, and Washington, D.C. offices, primarily to banks.
In 1975, he co-founded Black Data Processing Associates in Philadelphia and operated as its president for two years. In 1978, he coordinated the formation of BDPA into a national organization and served as its first national president until 1980. Black Data Processing Associates has grown into the largest national professional organization representing blacks in the information technology industry.
Pace is active in the business and education communities of Philadelphia, Washington, Baltimore, and other cities, where he makes presentations on topics of interest to IT professionals.
Awards
In 1997, he received the National Technical Association's National Technical Achiever Award as Computer Scientist of the Year.
In 2001 and 2002, Black Money magazine named him as one of the 50 Most Influential African Americans in Information Technology.
In 2011 CompTIA honored him by inducting him into the IT Hall of Fame as an innovator for co-founding the Black Data Processing Associates.
References
External links
BETF
Year of birth missing (living people)
Living people
American computer businesspeople
Pennsylvania State University alumni
Temple University alumni
Data activism |
47210 | https://en.wikipedia.org/wiki/Vesta%20%28mythology%29 | Vesta (mythology) | Vesta () is the virgin goddess of the hearth, home, and family in Roman religion. She was rarely depicted in human form, and was often represented by the fire of her temple in the Forum Romanum. Entry to her temple was permitted only to her priestesses, the Vestals, who tended the sacred fire at the hearth in her temple. As she was considered a guardian of the Roman people, her festival, the Vestalia (7–15 June), was regarded as one of the most important Roman holidays. During the Vestalia matrons walked barefoot through the city to the sanctuary of the goddess, where they presented offerings of food. Such was Vesta's importance to Roman religion that hers was one of the last republican pagan cults still active following the rise of Christianity until it was forcibly disbanded by the Christian emperor Theodosius I in AD 391.
The myths depicting Vesta and her priestesses were few, and were limited to tales of miraculous impregnation by a phallus appearing in the flames of the hearth—the manifestation of the goddess. Vesta was among the Dii Consentes, twelve of the most honored gods in the Roman pantheon. She was the daughter of Saturn and Ops, and sister of Jupiter, Neptune, Pluto, Juno, and Ceres. Her Greek equivalent is Hestia.
Etymology
Ovid derived Vesta from Latin – "standing by power". Cicero supposed that the Latin name Vesta derives from the Greek Hestia, which Cornutus claimed to have derived from Greek ("standing for ever"). This etymology is offered by Servius as well. Another etymology is that Vesta derives from Latin ("clothe"), as well as from Greek ("hearth" = focus urbis). None, except perhaps the last, are probable.
Georges Dumézil (1898–1986), a French comparative philologist, surmised that the name of the goddess derives from Proto-Indo-European root *h₁eu-, via the derivative form *h₁eu-s- which alternates with *h₁w-es-. The former is found in Greek εὕειν , Latin , and Vedic osathi all conveying 'burning' and the second is found in Vesta. (Greek goddess-name Ἑστία Hestia is probably unrelated). See also Gallic Celtic visc "fire."
Poultney suggests that Vesta may be related to the Umbrian god Uestisier (gen.)/Vestiçe (dat.) (as if Latin *Vesticius), itself related to Umbrian terms for 'libation' uestisiar (gen.sg.), 'pour a libation' uesticatu (imv.) from *westikia and *westikato:d respectively. Perhaps also related to Oscan Veskeí from the Oscan Tablet also known as the Agnone Dedication.
History
Origin
According to tradition, worship of Vesta in Italy began in Lavinium, the mother-city of Alba Longa and the first Trojan settlement. From Lavinium worship of Vesta was transferred to Alba Longa. Upon entering higher office, Roman magistrates would go to Lavinium to offer sacrifice to Vesta and the household gods the Romans called Penates. The Penates were Trojan gods first introduced to Italy by Aeneas. Alongside those household gods was Vesta, who has been referred to as Vesta Iliaca (Vesta of Troy), with her sacred hearth being named Ilaci foci (Trojan hearth).
Worship of Vesta, like the worship of many gods, originated in the home, but became an established cult during the reign of either Romulus, or Numa Pompilius (sources disagree, but most say Numa). The priestesses of Vesta, known as Vestal Virgins, administered her temple and watched the eternal fire. Their existence in Alba Longa is connected with the early Roman traditions, for Romulus' mother Silvia was a priestess.
Roman Empire
Roman tradition required that the leading priest of the Roman state, the pontifex maximus, reside in a domus publica ("publicly owned house"). After assuming the office of pontifex maximus in 12 BC, Augustus gave part of his private house to the Vestals as public property and incorporated a new shrine of Vesta within it. The old shrine remained in the Forum Romanum's temple of Vesta, but Augustus' gift linked the public hearth of the state with the official home of the pontifex maximus and the emperor's Palatine residence. This strengthened the connection between the office of pontifex maximus and the cult of Vesta. Henceforth, the office of pontifex maximus was tied to the title of emperor; emperors were automatically priests of Vesta, and the pontifices were sometimes referred to as pontifices Vestae ("priests of Vesta"). In 12 BC, 28 April (first of the five-day Floralia) was chosen ex senatus consultum to commemorate the new shrine of Vesta in Augustus' home on the Palatine. The latter's hearth was the focus of the Imperial household's traditional religious observances. Various emperors led official revivals and promotions of the Vestals' cult, which in its various locations remained central to Rome's ancient traditional cults into the 4th century. Dedications in the Atrium of Vesta, dating predominantly AD 200 to 300, attest to the service of several Virgines Vestales Maximae. Vesta's worship began to decline with the rise of Christianity. In ca. 379, Gratian stepped down as pontifex maximus; in 382 he confiscated the Atrium Vestae; simultaneously, he withdrew its public funding. In 391, despite official and public protests, Theodosius I closed the temple, and extinguished the sacred flame. Finally, Coelia Concordia stepped down as the last Vestalis Maxima ("chief Vestal") in 394.
Depictions
Depicted as a good-mannered deity who never involved herself in the quarreling of other gods, Vesta was ambiguous at times due to her contradictory association with the phallus. She is considered the embodiment of the "Phallic Mother" by proponents of 20th Century psychoanalysis: she was not only the most virgin and clean of all the gods, but was addressed as mother and granted fertility. Mythographers tell us that Vesta had no myths save being identified as one of the oldest of the gods who was entitled to preference in veneration and offerings over all other gods. Unlike most gods, Vesta was hardly depicted directly; nonetheless, she was symbolized by her flame, the fire stick, and a ritual phallus (the fascinus).
While Vesta was the flame itself, the symbol of the phallus might relate to Vesta's function in fertility cults, but it may also have invoked the goddess herself due to its relation to the fire stick used to light the sacred flame. She was sometimes thought of as a personification of the fire stick, which was inserted into a hollow piece of wood and rotated – in a phallic manner – to light her flame.
Hearth
Concerning the status of Vesta's hearth, Dionysius of Halicarnassus had this to say: "And they regard the fire as consecrated to Vesta, because that goddess, being the Earth and occupying the central position in the universe, kindles the celestial fires from herself." Ovid agreed, saying: "Vesta is the same as the earth, both have the perennial fire: the Earth and the sacred Fire are both symbolic of home." The sacred flames of the hearth were believed to be indispensable for the preservation and continuity of the Roman State: Cicero states it explicitly. The purity of the flames symbolised the vital force that is the root of the life of the community. It was also because the virgins' ritual concern extended to the agricultural cycle and ensured a good harvest that Vesta enjoyed the title of Mater ("Mother").
The fecundating power of sacred fire is testified in Plutarch's version of the birth of Romulus, the birth of king Servius Tullius (in which his mother Ocresia becomes pregnant after sitting upon a phallus that appeared among the ashes of the ara of god Vulcanus, by order of Tanaquil wife of king Tarquinius Priscus) and the birth of Caeculus, the founder of Praeneste. All these mythical or semilegendary characters show a mystical mastery of fire, e.g., Servius's hair was kindled by his father without hurting him, his statue in the temple of Fortuna Primigenia was unharmed by fire after his assassination. Caeculus kindled and extinguished fires at will.
Marriage
Vesta was connected to liminality, and the limen ("threshold") was sacred to her: brides were careful not to step on it, else they commit sacrilege by kicking a sacred object. Servius explains that it would be poor judgement for a virgin bride to kick an object sacred to Vesta – a goddess that holds chastity sacred. On the other hand, it might merely have been because Romans considered it bad luck to trample any object sacred to the gods. In Plautus' Casina, the bride Casina is cautioned to lift her feet carefully over the threshold following her wedding so she would have the upper hand in her marriage. Likewise, Catullus cautions a bride to keep her feet over the threshold "with a good omen".
In Roman belief, Vesta was present in all weddings, and so was Janus: Vesta was the threshold and Janus the doorway. Similarly, Vesta and Janus were invoked in every sacrifice. It has been noted that because they were invoked so often, the evocation of the two came to simply mean "to pray". In addition, Vesta was present with Janus in all sacrifices as well (Servius, Ad. Aen. 1.292). It has also been noted that neither of them were consistently illustrated as human. This has been suggested as evidence of their ancient Italic origin, because neither of them were "fully anthropomorphized".
Agriculture
Counted among the agricultural deities, Vesta has been linked to the deities Tellus and Terra in separate accounts. In Antiquitates rerum humanarum et divinarum, Varro links Vesta to Tellus. He says: "They think Tellus... is Vesta, because she is 'vested' in flowers". Verrius Flaccus, however, had identified Vesta with Terra. Ovid hints at Vesta's connection to both of the deities.
Temple
Where the majority of temples would have a statue, that of Vesta had a hearth. The fire was a religious center of Roman worship, the common hearth (focus publicus) of the whole Roman people. The Vestals were obliged to keep the sacred fire alight. If the fire went out, it had to be relit from an arbor felix (an auspicious tree, probably an oak). Water was not allowed into the inner aedes, nor could it stay longer than strictly needed on the nearby premises. It was carried by the Vestales in vessels called futiles, which had a tiny foot that made them unstable.
The temple of Vesta held not only the ignes aeternum ("sacred fire"), but the Palladium of Pallas Athena and the di Penates as well. Both of these items are said to have been brought into Italy by Aeneas. The Palladium of Athena was, in the words of Livy: "fatale pignus imperii Romani" ("[a] pledge of destiny for the Roman empire"). Such was the Palladium's importance, that when the Gauls sacked Rome in 390 BC, the Vestals first buried the Palladium before removing themselves to the safety of nearby Caere. Such objects were kept in the penus Vestae (i.e. the sacred repository of the temple of Vesta).
Despite being one of the most spiritual of Roman Shrines, that of Vesta was not a templum in the Roman sense of the word; that is, it was not a building consecrated by the augurs and so it could not be used for meetings by Roman officials. It has been claimed that the shrine of Vesta in Rome was not a templum, because of its round shape. However, a templum was not a building, but rather a sacred space that could contain a building of either rectangular or circular shape. In fact, early templa were often altars that were consecrated and later had buildings erected around them. The temple of Vesta in Rome was an aedes and not a templum, because of the character of the cult of Vesta – the exact reason being unknown.
Vestal Virgins
The Vestales were one of the few full-time clergy positions in Roman religion. They were drawn from the patrician class and had to observe absolute chastity for 30 years. It was from this that the Vestales were named the Vestal virgins. They wore a particular style of dress and they were not allowed to let the fire go out, on pain of a whipping. The Vestal Virgins lived together in a house near the Forum (Atrium Vestae), supervised by the Pontifex Maximus. On becoming a priestess, a Vestal Virgin was legally emancipated from her father's authority and swore a vow of chastity for 30 years (Dion. Hal. 2.67.2). A Vestal who broke this vow could be tried for incestum and, if found guilty, buried alive in the Campus Sceleris ('Field of Wickedness') (Plut. Numa 10.4).
The lanas (woolen threads) that were an essential part of the Vestal costume were supplied by the rex sacrorum and the flamen dialis. Once a year, the Vestals gave the rex sacrorum a ritualised warning to be vigilant in his duties, using the phrase "Vigilasne rex, vigila!" In Cicero's opinion, the Vestals ensured that Rome kept its contact with the gods.
A peculiar duty of the Vestals was the preparation and conservation of the sacred brine (muries) used for the savouring of the mola salsa, a salted flour mixture to be sprinkled on sacrificial victims (hence the Latin verb immolare, "to put on the mola, to sacrifice"). This dough too was prepared by them on fixed days. Theirs was also the task of preparing the suffimen for the Parilia.
Festivals
Domestic and family life in general were represented by the festival of the goddess of the house and of the spirits of the storechamber – Vesta and the Penates – on the Vestalia (7–15 June). On the first day of festivities the penus Vestae (the sanctum sanctorum of her temple, which was usually curtained off) was opened, for the only time during the year, and women offered sacrifices there. As long as the curtain remained open, mothers could come, barefoot and disheveled, to leave offerings to the goddess in exchange for a blessing for them and their family. The animal consecrated to Vesta, the donkey, was crowned with garlands of flowers and bits of bread on 9 June. The final day (15 June) was the day "when dung may be removed lawfully" – the penus Vestae was solemnly closed; the Flaminica Dialis observed mourning, and the temple was subjected to a purification called stercoratio: the filth was swept from the temple and carried next by the route called clivus Capitolinus and then into the Tiber.
In the military Feriale Duranum (AD 224) the first day of Vestalia is Vesta and the last day is Vesta cluditur. This year records a supplicatio dedicated to Vesta for 9 June, and records of the Arval Brethren on this day observe a blood sacrifice to her as well. Found in the Codex-Calendar of 354, 13 February had become the holiday Virgo Vestalis parentat, a public holiday which by then had replaced the older parentalia where the sacrifice of cattle over flames is now dedicated to Vesta. This also marks the first participation of the Vestal Virgins in rites associated with the Manes.
Mythography
Vesta had no official mythology, and she existed as an abstract goddess of the hearth and of chastity. Only in the account of Ovid at Cybele's party does Vesta appear directly in a myth.
Birth of Romulus and Remus
Plutarch, in his Life of Romulus, told a variation of Romulus' birth citing a compilation of Italian history by a Promathion. In this version, while Tarchetius was king of Alba Longa, a phantom phallus appeared in his hearth. The king visited an oracle of Tethys in Etrusca, who told him that a virgin must have intercourse with this phallus. Tarchetius instructed one of his daughters to do so, but she refused sending a handmaiden in her place. Angered, the king contemplated her execution; however, Vesta appeared to him in his sleep and forbade it. When the handmaid gave birth to twins by the phantom, Tarchetius handed them over to his subordinate, Teratius, with orders to destroy them. Teratius instead carried them to the shore of the river Tiber and laid them there. Then a she-wolf came to them and breastfed them, birds brought them food and fed them, before an amazed cow-herder came and took the children home with him. Thus they were saved, and when they were grown up, they set upon Tarchetius and overcame him. Plutarch concludes with a contrast between Promathion's version of Romulus' birth and that of the more credible Fabius Pictor which he describes in a detailed narrative and lends support to.
Conception of Servius Tullius
Dionysius of Halicarnassus recounts a local story regarding the birth of king Servius Tullius. In it, a phallus rose from the hearth of Vesta in Numa's palace, and Ocresia was the first to see it. She immediately informed the king and queen. King Tarquinius, upon hearing this, was astonished; but Tanaquil, whose knowledge of divination was well-known, told him it was a blessing that a birth by the hearth's phallus and a mortal woman would produce superior offspring. The king then chose Ocresia to have intercourse with it, for she had seen it first. During which either Vulcan, or the tutelary deity of the house, appeared to her. After disappearing, she conceived and delivered Tullius. This story of his birth could be based on his name as Servius would euphemistically mean "son of servant", because his mother was a handmaiden.
Impropriety of Priapus
In book 6 of Ovid's Fasti: Cybele invited all the gods, satyrs, rural divinities, and nymphs to a feast, though Silenus came uninvited with his donkey. At it, Vesta lay at rest, and Priapus spotted her. He decided to approach her in order to violate her; however, the ass brought by Silenus let out a timely bray: Vesta was woken and Priapus barely escaped the outraged gods. Mentioned in book 1 of the Fasti is a similar instance of Priapus' impropriety involving Lotis and Priapus. The Vesta-Priapus account is not as well developed as that involving Lotis, and critics suggest the account of Vesta and Priapus only exists to create a cult drama. Ovid says the donkey was adorned with necklaces of bread-bits in memory of the event. Elsewhere, he says donkeys were honored on 9 June during the Vestalia in thanks for the services they provided in the bakeries.
Vesta outside Rome
Vesta's cult is attested at Bovillae, Lavinium and Tibur. At Bovillae were located the Alban Vestals (Albanae Longanae Bovillenses), supposed to be continuing the Alban Vestals. Lavinium had the Vestals of the Laurentes Lavinates. The two orders were rooted in the most ancient tradition predating Rome. Tibur too had its own Vestals, who are attested epigraphically.
Vestals might have been present in the sanctuary of Diana Nemorensis near Aricia.
See also
Clerical celibacy
House of the Vestals
Temple of Vesta, Tivoli
4 Vesta, one of the largest objects in the asteroid belt
Citations
Sources
Ancient
Gaius Valerius Catullus in Carmina
Marcus Tullius Cicero in Pro Fonteio
Dionysius of Halicarnassus in Romaike Archaiologia
Gaius Acilius in Annales Aciliani
Aulus Gellius in Noctes Atticae
Maurus Servius Honoratus in In Vergilii Aeneidem commentarii
Maurus Servius Honoratus in Eclogues
Publius Ovidius Naso in Amores
Publius Ovidius Naso in Fasti
Gaius Petronius Arbiter in Satyricon
Titus Maccius Plautus in Casina
Gaius Plinius Secundus in Naturalis Historia
Lucius Mestrius Plutarchus in Life of Numa
Lucius Mestrius Plutarchus in Life of Romulus
Modern
External links
Vesta at Encyclopædia Britannica.
Dii Familiaris
Domestic and hearth deities
Fire goddesses
Hestia
Roman goddesses
Virgin goddesses |
25507191 | https://en.wikipedia.org/wiki/Xcast | Xcast | The explicit multi-unicast (Xcast) is a variation of multicast that supports a great number of multicast sessions with a small number of recipients in each. It adds all the destination IP addresses in the IP header, instead of using a multicast address. The traditional multicast schemes over Internet Protocol (IP) scale to multicast groups with many members, but they have scalability problems for a great number of groups. Multicast schemes can be used to minimize the bandwidth consumption. Xcast minimizes bandwidth consumption for small groups, by eliminating the signaling protocols and state information for every session of the standard IP multicast scheme.
Description
In Xcast, the source node keeps the list of all destinations of the multicast channel through which packets will be sent. The source encodes this destination list in the Xcast header and sends the packet towards the first router. Each router along the path looks up the next hop for every destination in its ordinary unicast routing table, groups the destinations in the header by next hop, and sends one copy of the packet – with the destination list rewritten accordingly – to each distinct next hop. At the last hop, when only one address remains in the destination field, no further copies are needed and the packet is treated just like a unicast packet; this conversion is called Xcast to Unicast (X2U).
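The per-router forwarding decision can be sketched as follows (an illustrative Python model, not an implementation of the actual Xcast packet format; the routing table is a made-up destination-to-next-hop mapping):

def xcast_forward(destinations, routing_table):
    # Group destinations by next hop; return one (next_hop, dest_list, mode) per copy.
    copies = {}
    for dest in destinations:
        next_hop = routing_table[dest]        # ordinary unicast lookup per destination
        copies.setdefault(next_hop, []).append(dest)
    packets = []
    for next_hop, dests in copies.items():
        if len(dests) == 1:
            packets.append((next_hop, dests, "unicast (X2U)"))   # single receiver left
        else:
            packets.append((next_hop, dests, "xcast"))           # keep the Xcast header
    return packets

table = {"A": "r1", "B": "r1", "C": "r2"}     # hypothetical next hops
print(xcast_forward(["A", "B", "C"], table))
# -> one Xcast copy towards r1 carrying A and B, one X2U copy towards r2 for C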
The IP multicast standard was designed to scale to multicast groups with many members. It works well for broadcast-like distribution, but it has scalability problems when the number of groups is large. Multicast routing protocols keep routing tables that record the multicast group addresses that have members; these tables might become large, which prompted alternative schemes to reduce the quantity of state information. IP multicast protocols also announce sources or maintain routes between routers, and the cost of these protocols can be significant even when each group is small.
Xcast follows the philosophy that worked well in growing the Internet: keep the core of the network simple, and do the complicated operations at the edges.
An open source implementation was available from IBM starting in 2001.
A MediaWiki-based web site (English language, but registered in Japan) indicates activity from 2004 through 2007.
An informational specification was published by the Internet Engineering Task Force in November 2007 as RFC 5058.
Advantages
Routers do not need to keep information for every session or channel. This makes Xcast very scalable in the number of sessions it can support.
There is no need to allocate multicast group addresses.
No multicast routing protocols are needed; packets are routed correctly by the common unicast routing protocols.
There is no critical node. Xcast minimizes the network latencies and maximizes efficiency.
Symmetric paths are not required.
With traditional IP multicast routing protocols it is necessary to establish communication between the unicast and multicast routing protocols, which makes error recovery slow. Xcast reacts immediately to unicast routing changes.
Easier security and accounting. With Xcast all sources know the channel members, and every router is able to know how many times each packet has been duplicated in its domain.
Receivers can be heterogeneous, since Xcast allows each receiver to have its own service requirements in each multicast channel.
Simplicity when implementing reliable protocols over Xcast.
Flexibility: unicast, multicast and Xcast trade off bandwidth, signaling and processing costs respectively. Depending on how the network is built or how loaded it is at a given moment, one scheme or another may be preferable; Xcast is simply another alternative.
Disadvantages
Each packet contains all the remaining destinations, which increases its header size.
It requires more complex header processing. Each forwarding step looks into the routing table once per remaining destination, so the table is consulted as many times as it would be for separate unicast packets to each destination, and a new header must be generated after every hop.
But on the other hand:
Xcast is designed for sessions with few receivers each, so at many routers the header will contain just one address.
Header building can become a very cheap operation, such as overwriting a bit map.
When the packet reaches a region where bandwidth is not limited, the packet can be converted to unicast early (premature X2U).
Applications
Xcast allows efficient applications such as VoIP, video conferencing, or collaborative meetings.
These applications could be done using just unicast, but in cases with limited bandwidth, the Xcast efficiency might be useful.
On the other hand, since Xcast does not scale to groups with many members, it can not substitute for all other multicast models.
See also
Unicast
Multicast
Broadcast
References
Internet architecture
Network protocols |
2205688 | https://en.wikipedia.org/wiki/Pedasus | Pedasus | Pedasus (Ancient Greek: Πήδασος) has been identified with several personal and place names in Greek history and mythology.
Persons
In Homer's Iliad, Pedasus was the name of a Trojan warrior, and the son of the naiad Abarbarea and human Bucolion. His twin brother was Aesepus; both were slain by Euryalus, the son of Mecisteus, during the Trojan War.
In Homer's Iliad, Pedasus was also the name of a swift horse taken as booty by Achilles when he killed Eetion. This horse was killed by a spear during a duel between Patroclus and Sarpedon.
Places
Pedasus (Caria): In Caria, according to Herodotus, the Battle of Pedasus (Summer of 496 BCE) was a night ambush where the Carians annihilated a Persian army. This engagement occurred during the Ionian Revolt (499-494 BCE).
Pedasus (Messenia): In Peloponnese, Methone has been identified with the vine-covered Pedasus, one of the seven cities offered by Agamemnon to Achilles to quell his rage and to persuade him to return to the Siege of Troy.
Pedasus (Mysia): In the Troad, there was another Pedasus on the Satnioeis river, said to be inhabited by a tribe called the Leleges. During the Trojan War, this Pedasus was ruled over by a certain king named Altes, who was killed by Agamemnon. This city was sacked by Achilles.
Notes
References
Herodotus, The Histories with an English translation by A. D. Godley. Cambridge. Harvard University Press. 1920. . Online version at the Topos Text Project. Greek text available at Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Trojans
Characters in the Iliad
Characters in Greek mythology |
45482253 | https://en.wikipedia.org/wiki/Parliament%20Security%20Services | Parliament Security Services | Repercussion of bomb throwing incident in Lok Sabha Chamber, the then Central Legislative Assembly on 08th April, 1929, The then President of Central Legislative Assembly, Shri Vithalbhai Patel, (24 August 1925 – April 1930) set up a ‘WATCH AND WARD COMMITTEE’ on 03rd September 1929.
Sir James Crerar, Chairman of the Committee, recommended the establishment of a corps of door-keepers and messengers. Initially 21 men were nominated for access control in the complex. The Sergeant-at-Arms (Colin Keppel) was designated as Controller, another 25 officials were drawn from the Delhi Police (then the Metropolitan Police) for deployment in the galleries, and a Watch & Ward Officer was appointed to carry out the directions of the Hon'ble Speaker under the guidance of the Secretary General. The Watch & Ward was renamed the Parliament Security Service on 15 April 2009.
The Parliament Security Service, headed by the Joint Secretary (Security), looks after the security set-up in the Indian Parliament House complex. The Director (Security) of the Rajya Sabha Secretariat exercises operational security control over the Parliament Security Service in the Rajya Sabha sector, under the administrative control of the Rajya Sabha Secretariat; the Director (Security) of the Lok Sabha Secretariat does the same for the Lok Sabha sector, under the administrative control of the Lok Sabha Secretariat. The Parliament Security Service is the in-house system that provides proactive, preventive and protective security to VIPs/VVIPs, the building and its occupants, and is solely responsible for managing access control and regulating people, material and vehicles within the Parliament House Complex.
As the in-house security service, its approach revolves around the principle of access control, based on proper authorization, verification, identification and authentication of people and material entering the Parliament House Complex, with the help of modern security equipment. Since the threat perception has increased over the years, owing to the growth of terrorist organizations, refinements in their planning, intelligence and actions, and the surrogate-warfare tactics employed by organizations sponsoring terrorists, new security procedures have been introduced to counter the ever-changing modus operandi of terrorist outfits and individuals posing a threat to the Parliament House Complex and its VIPs.
The Parliament Security Service is the nodal security organization responsible for the security of the Parliament House Complex; this objective is achieved by coordinating with various other security agencies.
Other security agencies, viz. the Delhi Police, the Parliament Duty Group/Central Reserve Police Force, the Delhi Fire Service, the Intelligence Bureau, the SPG and the NSG, assist the Parliament Security Service.
Because its officers are expert in identifying Members of Parliament, other departments and institutions call on PSS officers for assistance with identification during VVIP functions. The Parliament Security Service therefore also assists the President's House during oath ceremonies and At-Home functions, and the Army and Delhi Police during the Republic Day (26 January) function on Rajpath and the Independence Day (15 August) functions held at the Red Fort every year.
The Parliament Security Service plays an important operational role during the Presidential election. It coordinates with the Bureau of Civil Aviation Security, the Delhi Police and airport security for the collection from the airport of the ballot boxes containing ballot papers from the respective state legislatures, and for their safe transportation under armed guard from the airport to Parliament House, where they are placed in the safe custody of the Returning Officer, under lock and key and the protection of round-the-clock armed guards. After the completion of the counting and the declaration of the result, the ballot boxes are duly returned to the Election Commission.
One of the important operational activities of the Parliament Security Service is showing visitors around the Parliament House Complex during the inter-session period. Sub Officers of the Parliament Security Service are deputed to ensure that visitors, foreign dignitaries and delegations are escorted properly and given factual and detailed information about the history of the Parliament, its building and the procedures followed in conducting the proceedings of Parliament. For students, it is designed more or less on the pattern of an educational tour. Visitors are also given a brief about the statues and portraits installed in the complex.
References
Parliament of India |
1856144 | https://en.wikipedia.org/wiki/Z/Architecture | Z/Architecture | z/Architecture, initially and briefly called ESA Modal Extensions (ESAME), is IBM's 64-bit complex instruction set computer (CISC) instruction set architecture, implemented by its mainframe computers. IBM introduced its first z/Architecture-based system, the z900, in late 2000. Later z/Architecture systems include the IBM z800, z990, z890, System z9, System z10, zEnterprise 196, zEnterprise 114, zEC12, zBC12, z13, z14 and z15.
z/Architecture retains backward compatibility with previous 32-bit-data/31-bit-addressing architecture ESA/390 and its predecessors all the way back to the 32-bit-data/24-bit-addressing System/360. The IBM z13 is the last z Systems server to support running an operating system in ESA/390 architecture mode. However, all 24-bit and 31-bit problem-state application programs originally written to run on the ESA/390 architecture will be unaffected by this change.
Each z/OS address space, called a 64-bit address space, is 16 exabytes in size.
Code (or mixed) spaces
Most operating systems for the z/Architecture, including z/OS, generally restrict code execution to the first 2 GB (31 address bits, or 2^31 addressable bytes) of each virtual address space for reasons of efficiency and compatibility rather than because of architectural limits. The z/OS implementation of the Java programming language is an exception. The z/OS virtual memory implementation supports multiple 2 GB address spaces, permitting more than 2 GB of concurrently resident program code. The 64-bit version of Linux on IBM Z allows code to execute within 64-bit address ranges.
Data-only spaces
For programmers who need to store large amounts of data, the 64-bit address space usually suffices.
Dataspaces and hiperspaces
Applications that need more than a 16 exabyte data address space can employ extended addressability techniques, using additional address spaces or data-only spaces. The data-only spaces that are available for user programs are called:
dataspaces (sometimes referred to as "data spaces") and
hiperspaces (High performance space).
These spaces are similar in that both are areas of virtual storage that a program can create, and can be up to 2 gigabytes. Unlike an address space, a dataspace or hiperspace contains only user data; it does not contain system control blocks or common areas. Program code cannot run in a dataspace or a hiperspace.
A dataspace differs from a hiperspace in that dataspaces are byte-addressable, whereas hiperspaces are page-addressable.
IBM mainframe expanded storage
Traditionally IBM Mainframe memory has been byte-addressable. This kind of memory is termed "Central Storage". IBM Mainframe processors through much of the 1980s and 1990s supported another kind of memory: Expanded Storage.
Expanded Storage is 4KB-page addressable. When an application wants to access data in Expanded Storage it must first be moved into Central Storage. Similarly, data movement from Central Storage to Expanded Storage is done in multiples of 4KB pages. Initially page movement was performed using relatively expensive instructions, by paging subsystem code.
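The page-granular behaviour described above can be pictured with a brief sketch (conceptual Python, not actual system code; only the 4 KB page size is taken from the text):

PAGE_SIZE = 4096                                   # Expanded Storage is 4 KB-page addressable

central = {}                                       # byte-addressable pages resident in Central Storage
expanded = {n: bytes(PAGE_SIZE) for n in range(4)} # page-addressable Expanded Storage

def read_byte(page_no, offset):
    # A byte in Expanded Storage cannot be read directly; its whole 4 KB page
    # must first be moved into Central Storage.
    if page_no not in central:
        central[page_no] = expanded.pop(page_no)
    return central[page_no][offset]

print(read_byte(2, 100))   # triggers a page move, then a byte access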
The overhead of moving single pages and groups of pages between Central and Expanded Storage was reduced with the introduction of the MVPG (Move Page) instruction and the ADMF (Asynchronous Data Mover Facility) capability.
The MVPG instruction and ADMF are explicitly invoked—generally by middleware in z/OS or z/VM (and ACP?)—to access data in expanded storage. Some uses are namely:
MVPG is used by VSAM Local Shared Resources (LSR) buffer pool management to access buffers in a hiperspace in Expanded Storage.
Both MVPG and ADMF are used by DB2 to access hiperpools. Hiperpools are portions of a buffer pool located in a hiperspace.
VM Minidisk Caching.
Until the mid-1990s Central and Expanded Storage were physically different areas of memory on the processor. Since the mid-1990s Central and Expanded Storage were merely assignment choices for the underlying processor memory.
These choices were made based on specific expected uses:
For example, Expanded Storage is required for the Hiperbatch function (which uses the MVPG instruction to access its hiperspaces).
In addition to the hiperspace and paging cases mentioned above there are other uses of expanded storage, including:
Virtual I/O (VIO) to Expanded Storage which stored temporary data sets in simulated devices in Expanded Storage. (This function has been replaced by VIO in Central Storage.)
VM Minidisk Caching.
z/OS has removed support for Expanded Storage; all memory in z/OS is now Central Storage. z/VM 6.4 fulfilled a Statement of Direction to drop support for all use of Expanded Storage.
MVPG and ADMF
MVPG
IBM described MVPG as "moves a single page and the central processor cannot execute any other instructions until the page move is completed."
The MVPG mainframe instruction (MoVe PaGe, opcode X'B254') has been compared to the MVCL (MoVe Character Long) instruction, both of which can move more than 256 bytes within main memory using a single instruction. These instructions do not comply with definitions for atomicity, although they can be used as a single instruction within documented timing and non-overlap restrictions.
The need to move more than 256 bytes within main memory had historically been addressed with software (MVC loops). MVCL, which was introduced with the 1970 announcement of the System/370, and MVPG, patented and announced by IBM in 1989, each have advantages.
ADMF
ADMF (Asynchronous Data Mover Facility), which was introduced in 1992, goes beyond the capabilities of the MVPG (Move Page) instruction, which is limited to a single page, and can move groups of pages between Central and Expanded Storage.
A macro instruction named IOSADMF, which has been described as an API that avoids "direct, low-level use of ADMF," can be used to read or write data to or from a hiperspace. Hiperspaces are created using DSPSERV CREATE.
To provide reentrancy, IOSADMF is used together with a "List form" and "Execute form."
z/Architecture operating systems
The z/VSE Version 4, z/TPF Version 1 and z/VM Version 5 operating systems, and presumably their successors, require z/Architecture.
z/Architecture supports running multiple concurrent operating systems and applications even if they use different address sizes. This allows software developers to choose the address size that is most advantageous for their applications and data structures.
Platform Solutions Inc. (PSI) previously marketed Itanium-based servers which were compatible with z/Architecture. IBM bought PSI in July 2008, and the PSI systems are no longer available. FLEX-ES, zPDT and the Hercules emulator also implement z/Architecture. Hitachi mainframes running newer releases of the VOS3 operating system implement ESA/390 plus Hitachi-unique CPU instructions, including a few 64-bit instructions. While Hitachi was likely inspired by z/Architecture, and formally collaborated with IBM on the z900-G2/z800 CPUs introduced in 2002, Hitachi's machines are not z/Architecture-compatible.
On July 7, 2009, IBM on occasion of announcing a new version of one of its operating systems implicitly stated that Architecture Level Set 4 (ALS 4) exists, and is implemented on the System z10 and subsequent machines. The ALS 4 is also specified in LOADxx as ARCHLVL 3, whereas the earlier z900, z800, z990, z890, System z9 specified ARCHLVL 2. Earlier announcements of System z10 simply specified that it implements z/Architecture with some additions: 50+ new machine instructions, 1 MB page frames, and hardware decimal floating point unit (HDFU).
Notes
References
Further reading
Preshing on Programming - Atomic vs. Non-Atomic Operations
Principles of Computer Design - Atomicity
IBM mainframe technology
Instruction set architectures
Computer-related introductions in 2000
mainframe expanded storage
64-bit computers |
309457 | https://en.wikipedia.org/wiki/Barcode%20reader | Barcode reader | A barcode reader (or barcode scanner) is an optical scanner that can read printed barcodes, decode the data contained in the barcode and send the data to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor that translates optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that can analyze the barcode's image data provided by the sensor and send the barcode's content to the scanner's output port.
Types of barcode scanners
Technology
Barcode readers can be differentiated by technologies as follows:
Pen-type readers
Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded.
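The decoding step can be pictured with a small sketch (hypothetical Python, not the firmware of any real reader): the sampled reflectance waveform is thresholded into dark and light, and the run length of each level gives the relative width of a bar or space.

def widths_from_waveform(samples, threshold=0.5):
    # Turn a sampled reflectance waveform into (element, width) pairs.
    # Low reflectance (below the threshold) is a dark bar, high reflectance a space.
    # A real decoder would then match the widths against the symbology's patterns.
    elements = []
    current = "bar" if samples[0] < threshold else "space"
    run = 0
    for s in samples:
        kind = "bar" if s < threshold else "space"
        if kind == current:
            run += 1
        else:
            elements.append((current, run))
            current, run = kind, 1
    elements.append((current, run))
    return elements

# A toy waveform: wide bar, narrow space, narrow bar, wide space.
waveform = [0.1]*6 + [0.9]*2 + [0.1]*2 + [0.9]*6
print(widths_from_waveform(waveform))
# -> [('bar', 6), ('space', 2), ('bar', 2), ('space', 6)]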
Laser scanners
Laser scanners work the same way as pen-type readers except that they use a laser beam as the light source and typically employ either a reciprocating mirror or a rotating prism to scan the laser beam back and forth across the barcode. As with the pen-type reader, a photo-diode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern and the photo-diode receive circuitry is designed to detect only signals with the same modulated pattern.
CCD readers (also known as LED scanners)
Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the reader. Each sensor measures the intensity of the light immediately in front of it. Each individual light sensor in the CCD reader is extremely small and because there are hundreds of sensors lined up in a row, a voltage pattern identical to the pattern in a barcode is generated in the reader by sequentially measuring the voltages across each sensor in the row. The important difference between a CCD reader and a pen or laser scanner is that the CCD reader is measuring emitted ambient light from the barcode whereas pen or laser scanners are measuring reflected light of a specific frequency originating from the scanner itself. LED scanners can also be made using CMOS sensors, and are replacing earlier Laser-based readers.
Camera-based readers
Two-dimensional imaging scanners are a newer type of barcode reader. They use a camera and image processing techniques to decode the barcode.
Video camera readers use small video cameras with the same CCD technology as in a CCD barcode reader except that instead of having a single row of sensors, a video camera has hundreds of rows of sensors arranged in a two dimensional array so that they can generate an image.
Large field-of-view readers use high-resolution industrial cameras to capture multiple bar codes simultaneously. All the bar codes appearing in the photo are decoded instantly (using ImageID patents and code creation tools) or by use of plugins (e.g. Barcodepedia used a Flash application and a webcam to query a database).
Omnidirectional barcode scanners
Omnidirectional scanning uses a series of straight or curved scanning lines of varying directions in the form of a starburst, a Lissajous curve, or other multiangle arrangement, projected at the symbol so that one or more of them will cross all of the symbol's bars and spaces, no matter what the orientation. Almost all of these scanners use a laser. Unlike the simpler single-line laser scanners, they produce a pattern of beams in varying orientations, allowing them to read barcodes presented at different angles. Most of them use a single rotating polygonal mirror and an arrangement of several fixed mirrors to generate their complex scan patterns.
Omnidirectional scanners are most familiar through the horizontal scanners in supermarkets, where packages are slid over a glass or sapphire window. A range of different omnidirectional units is available for differing scanning applications, from retail-type applications, with the barcodes read only a few centimetres away from the scanner, to industrial conveyor scanning, where the unit can be a couple of metres or more away from the code. Omnidirectional scanners are also better at reading poorly printed, wrinkled, or even torn barcodes.
Cell phone cameras
While cell phone cameras without auto-focus are not ideal for reading some common barcode formats, there are 2D barcodes which are optimized for cell phones, such as QR (Quick Response) codes and Data Matrix codes, which can be read quickly and accurately with or without auto-focus.
Cell phone cameras open up a number of applications for consumers. For example:
Movies: DVD/VHS movie catalogs.
Music: CD catalogs – playing an MP3 when scanned.
Book catalogs and device.
Groceries, nutrition information, making shopping lists when the last of an item is used, etc.
Personal Property inventory (for insurance and other purposes) code scanned into personal finance software when entering. Later, scanned receipt images can then be automatically associated with the appropriate entries. Later, the barcodes can be used to rapidly weed out paper copies not required to be retained for tax or asset inventory purposes.
If retailers put barcodes on receipts that allowed downloading an electronic copy or encoded the entire receipt in a 2D barcode, consumers could easily import data into personal finance, property inventory, and grocery management software. Receipts scanned on a scanner could be automatically identified and associated with the appropriate entries in finance and property inventory software.
Consumer tracking from the retailer perspective (for example, loyalty card programs that track consumers purchases at the point of sale by having them scan a QR code).
A number of enterprise applications using cell phones are appearing:
Access control (for example, ticket validation at venues), inventory reporting (for example, tracking deliveries), asset tracking (for example, anti-counterfeiting).
Smartphones
On Google's mobile Android operating system, barcodes can be scanned with the Google Goggles application. Nokia's Symbian operating system features a barcode scanner, while mbarcode is a barcode reader for the Maemo operating system. In Apple's iOS, a barcode reader is natively supported within the camera app. On BlackBerry devices, the App World application can natively scan barcodes. Windows Phone 8 is able to scan barcodes through the Bing search app.
Housing
Barcode readers can be distinguished based on housing design as follows:
Handheld scanner – with a handle and typically a trigger button for switching on the light; scanners like this are used in factory and farm automation for quality management and shipping.
PDA scanner (or Auto-ID PDA) – a PDA with a built-in barcode reader or attached barcode scanner.
Automatic reader – back-office equipment that reads barcoded documents at high speed (50,000/hour).
Cordless scanner (or wireless scanner) – a barcode scanner operated by a battery fitted inside it rather than by mains power, which transfers data to a connected device such as a PC.
Barcode library
Main article: Barcode library (or Barcode SDK)
Currently any camera-equipped device, or any device with a document scanner, can be used as a barcode reader with special software libraries (barcode libraries). These allow developers to add barcode features to desktop, web, mobile or embedded applications. In this way, the combination of barcode technology and a barcode library makes it possible to implement, at low cost, automatic document processing (OMR), package-tracking applications or even augmented-reality applications.
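As an illustration of how little application code such a library requires, the sketch below uses the open-source pyzbar and Pillow packages (one possible choice among many; the file name is a placeholder, and whether these packages suit a given project is an assumption of the example):

# Requires: pip install pyzbar pillow (and the zbar shared library on the system)
from PIL import Image
from pyzbar.pyzbar import decode

def read_barcodes(path):
    # decode() returns one result per barcode found in the image.
    results = decode(Image.open(path))
    return [(r.type, r.data.decode("utf-8")) for r in results]

print(read_barcodes("label.png"))   # hypothetical file; e.g. [('EAN13', '9780201379624')]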
Methods of connection
Early serial interfaces
Early barcode scanners, of all formats, almost universally used the then-common RS-232 serial interface. This was an electrically simple means of connection, and the software to access it was also relatively simple, although it needed to be written for specific computers and their serial ports.
Proprietary interfaces
There are a few other less common interfaces. These were used in large EPOS systems with dedicated hardware, rather than attaching to existing commodity computers. In some of these interfaces, the scanning device returned a "raw" signal proportional to the intensities seen while scanning the barcode. This was then decoded by the host device. In some cases the scanning device would convert the symbology of the barcode to one that could be recognized by the host device, such as Code 39.
Keyboard wedge (USB, PS/2, etc.)
As the PC with its various standard interfaces evolved, it became ever easier to connect physical hardware to it. Also, there were commercial incentives to reduce the complexity of the associated software. The early "Keyboard wedge" hardware plugged in between the PS/2 port and the keyboard, with characters from the barcode scanner appearing exactly as if they had been typed at the keyboard. Today the term is used more broadly for any device which can be plugged in and contribute to the stream of data coming "from the keyboard". Keyboard wedges plugging in via the USB interface are readily available.
The "keyboard wedge" approach makes adding things such as barcode readers to systems simple. The software may well need no changes.
The concurrent presence of two "keyboards" does require some care on the part of the user. Also, barcodes often offer only a subset of the characters offered by a normal keyboard.
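Because a wedge scanner simply "types" the decoded characters followed by Enter, an application can treat a scan as an ordinary line of keyboard input, as in this minimal sketch (the prompt text is arbitrary):

```python
# Minimal sketch: a keyboard-wedge scanner needs no device-specific code --
# the scan arrives as ordinary typed characters terminated by Enter.
code = input("Scan an item: ").strip()
print("Received barcode:", code)
```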
USB
Subsequent to the PS/2 era, barcode readers began to use USB ports rather than the keyboard port, this being more convenient. To retain the easy integration with existing programs, it was sometimes necessary to load a device driver called a "software wedge", which facilitated the keyboard-impersonating behavior of the old "keyboard wedge" hardware.
Today, USB barcode readers are "plug and play", at least in Windows systems. Any necessary drivers are loaded when the device is plugged in.
In many cases, a choice of USB interface types (HID, CDC) is provided. Some readers also offer PoweredUSB.
Wireless networking
Some modern handheld barcode readers can be operated in wireless networks according to IEEE 802.11g (WLAN) or IEEE 802.15.1 (Bluetooth). Some barcode readers also support radio frequencies such as 433 MHz or 910 MHz. Readers without external power sources require their batteries to be recharged occasionally, which may make them unsuitable for some uses.
Resolution
Scanner resolution is determined by the size of the dot of light emitted by the reader. If this dot is wider than any bar or space in the barcode, it will overlap two elements (two spaces or two bars) and may produce incorrect output. Conversely, if the dot is too small, it can misinterpret any spot or void on the barcode, again making the output wrong.
The most commonly used dimension is 13 mil (0.013 in or 0.33 mm), although some scanners can read codes with elements as small as 3 mil (0.003 in or 0.075 mm). Smaller barcodes must be printed at high resolution to be read accurately.
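A quick way to check this relationship is to compare the scanner's dot size with the narrowest element of the symbol; the sketch below simply applies the rule of thumb stated above, using illustrative values.

```python
# Minimal sketch: can a scanner's light dot resolve a given barcode?
MM_PER_MIL = 0.0254   # 1 mil = 0.001 inch = 0.0254 mm

def can_resolve(dot_size_mil: float, narrowest_element_mil: float) -> bool:
    """Readable only if the dot is no wider than the narrowest bar or space."""
    return dot_size_mil <= narrowest_element_mil

print(can_resolve(13, 13))   # True: a standard 13 mil dot on a 13 mil code
print(can_resolve(13, 3))    # False: far too coarse for a 3 mil high-density code
print(3 * MM_PER_MIL)        # 0.0762 mm, roughly the 0.075 mm quoted above
```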
See also
Barcode, for more details about barcode technology, including links to technical details
Barcode Battler, a portable game console which scans barcodes as part of the gameplay
Barcode library, a software library that can be used to add barcode features to desktop, web, mobile or embedded applications.
CueCat, a cat-shaped handheld barcode reader (a curiosity from the history of the PC)
References
External links
American inventions
Automatic identification and data capture
Embedded systems
Reader
Packaging machinery
Image scanners
Records management technology
24995036 | https://en.wikipedia.org/wiki/AVG%20PC%20TuneUp | AVG PC TuneUp | AVG TuneUp, previously called AVG PC Tuneup, and TuneUp Utilities, is a utility software suite for Microsoft Windows designed to help manage, maintain, optimize, configure, and troubleshoot a computer system. It was produced and developed by TuneUp Software GmbH. TuneUp Software was headquartered in Darmstadt, Germany, and co-founded by Tibor Schiemann and Christoph Laumann in 1997. In 2011, AVG Technologies acquired TuneUp Software. AVG was then acquired by Avast in 2016.
As of 2018, eighteen major versions of TuneUp Utilities have been released. TuneUp Utilities has attained generally positive reviews, although multiple reviewers did not approve of its price for value.
Features
AVG PC TuneUp has features for PC maintenance, optimization, updates, to free up hard-drive space, and to uninstall unwanted applications. The "Automatic Maintenance" tool removes tracking cookies, cache files, old files from removed applications, and fixes issues with the Windows registry. PC TuneUp's "Sleep Mode" puts background processes to sleep until needed to reduce their burden on the computer's resources. PC TuneUp also has an uninstaller to remove unwanted programs like bloatware and a software updater that installs the most recent patches or updates. The Disk Cleaner and Browser Cleaner tools remove installer files, temporary system files, browser caches, and other files.
Development
The first version of the software, TuneUp 97, was released in 1997. New versions have been released over the years ever since.
TuneUp Utilities 2003 The first version to be available in English, French, and German. It consists of 16 individual tools accessible through the Start Center, as well as the Windows start menu. It includes features to clean the hard disks, clean and defragment the Windows Registry, optimize Windows and Internet connection settings and change the look and feel of Windows. It also provides features targeted at users with an intermediate or advanced level of computer knowledge that enable them to edit the registry, manage running processes, uninstall programs, shred and undelete files, and display system information. In addition to the previously-supported Windows 95 and Windows 98, TuneUp Utilities 2003 also supports Windows 2000, Me and Windows XP.
TuneUp Utilities 2004 The 2004 release introduced TuneUp 1-Click Maintenance and TuneUp WinStyler (the predecessor TuneUp Styler). It also includes registry defragmentation support for Windows 2000 and XP.
TuneUp Utilities 2006 In TuneUp Utilities 2006, optimization, customization, and disk cleaning tools that support Mozilla Firefox were added. A feature was added where the TuneUp StartUp Manager displays editorial rating and explanations about well-known programs that start during computer startup. TuneUp Styler, in this version, is able to change the boot logo of Windows XP.
TuneUp Utilities 2007 TuneUp Utilities 2007 featured two new components: TuneUp Disk Doctor and TuneUp Disk Space Explorer. TuneUp Utilities 2007 also supports Windows Vista.
TuneUp Utilities 2008 The 2008 version incorporated two more components: TuneUp Drive Defrag and TuneUp Repair Wizard.
TuneUp Utilities 2009 In the 2009 version, Start Center added a new section that analyzes the system and then displays the current status as well as available recommendations (if any) in three areas: System maintenance, Speed and System status. This version introduced TuneUp Speed Optimizer (renamed StartUp Optimizer in subsequent versions) and TuneUp Shortcut Cleaner. The TuneUp Styler added in this version can change the Windows Vista logo animation displayed during startup.
TuneUp Utilities 2010 TuneUp Utilities 2010 added compatibility with Windows 7. A new Turbo Mode introduced in this version allows the user to disable multiple background functions of Windows and programs with one click, like Windows Aero, Windows Search, Windows Error Reporting or synchronization with mobile devices. This version also introduced TuneUp Live Optimization.
TuneUp Utilities 2011 In 2011 TuneUp Program Deactivator was added. Deactivator can disable programs that impose significant system load, thereby eliminating the load without uninstalling the programs. If the user tries to start a disabled program again, TuneUp Program Deactivator automatically re-enables the program on the fly. A new program rating functionality in this version shows how other TuneUp Utilities users have rated the usefulness of a given program on a scale of 1 to 5 stars. The Start Center also includes a Tuning Status, which tracks and displays optimization progress and highlights areas with remaining optimization potential.
TuneUp Utilities 2012 The 2012 version introduced a new Economy Mode that, when enabled, helps save battery power.
TuneUp Utilities 2013 In 2013, the software was improved in the area of disk cleanup and performance optimization via the Program Deactivator and the Live Optimization. Windows 8 support was added.
TuneUp Utilities 2014 The 2014 version of TuneUp Utilities introduced the Duplicate Finder, Windows 8.1 App Cleaner, and Flight Mode. The user interface, Disk Cleaner, and automatic cleaning updates were also improved.
AVG PC TuneUp 2015 With the 2015 version, TuneUp Utilities was merged with the almost identical AVG PC TuneUp.
AVG PC TuneUp 2017 The 2017 version improved the licensing, so the software could be installed on an unlimited number of computers in the same household. It also introduced an automatic software updater that checks for newer versions of software installed on the computer.
AVG PC TuneUp 2018 The 2018 version added an improved user interface and new uninstaller tools.
AVG PC TuneUp 2019 The 2019 version added new statistics and a Disk Doctor feature to scan and fix hard drive errors on users' PCs.
Critical reception
TuneUp Utilities received generally positive reviews.
Computer Shopper magazine reviewed TuneUp Utilities 2009 and gave it a score of 8 out of 10. It commended TuneUp Registry Cleaner as well as the hard-drive-related components of the product. However, it also noted that some tools are superficially implemented. The software lacks an antivirus and personal firewall. TuneUp Utilities 2009 was voted No. 37 of "The Top 100 Products of 2009" by Computer Shopper readers and was named "Best Utility Suite" by the editors.
CNET reviewed TuneUp Utilities 2009 and gave it 5 stars out of 5. "To call TuneUp Utilities 2009 useful would drastically understate the situation", said Seth Rosenblatt, an associate editor with CNET. He said TuneUp Utilities was a powerful and easy-to-use set of tools, with its disk cleanup and registry cleaner being the "bread-and-butter" of the suite.
PC World's Preston Gralla reviewed the 2010 version and commented that TuneUp Utilities is a comprehensive suite that "includes everything from a startup optimizer to a defragmenter, from an overall speed optimizer to a Windows Registry cleaner, and more". However, he said that the high price of the entire suite ($50) might make a purchase decision more difficult. Gralla had also previously reviewed TuneUp Utilities 2009 for PC Advisor, giving it 4.5 stars out of 5.
PC Magazine reviewed TuneUp Utilities 2011 and gave it a score of 4 out of 5. "Overall, the software does a fine job of revitalizing a worn PC," commented Jeffrey L. Wilson, a PC Magazine software analyst. He appreciated the product's one-click repair feature and the subsequent reduction in his test PC's boot time. However, Wilson criticized the software license that only permits installation on up to three PCs. In comparison, a competing product called Iolo System Mechanic 10 allows an unlimited number of installations in the same household.
TuneUp Utilities received a Softpedia Pick award from Softpedia. Although Softpedia editor Alex Muradin expressed concern about the lack of proper technical support for TuneUp Utilities 2006, he gave the product a final score of 5 out of 5. However, he gave this product a subscore of 3 out of 5 for pricing/value.
Author Christian Immler characterizes TuneUp Utilities as a classic amongst tuning tools. CNET reviewed TuneUp Utilities 2015 and gave it a score of 3.5 out of 5. "AVG PC TuneUp is a well-designed and effective tool that mostly accomplishes what it claims. Its advantage lies in its streamlined user flow and one-click-friendly design," said Eddie Cho, a tech editor and producer for CNET.
Notes
Only Windows XP Home Edition, Professional Edition and Media Center Edition are supported.
References
External links
1997 software
Windows-only shareware
Proprietary software
Computer system optimization software
Data erasure software
Data recovery software
Utilities for Windows
Disk usage analysis software
61274 | https://en.wikipedia.org/wiki/Unisys | Unisys | Unisys Corporation is an American multinational information technology (IT) services and consulting company headquartered in Blue Bell, Pennsylvania. It is the legacy proprietor of the Burroughs and UNIVAC line of computers, formed when the former bought the latter.
History
Unisys was formed in 1986 through the merger of mainframe corporations Sperry and Burroughs, with Burroughs buying Sperry for $4.8 billion. The name was chosen from over 31,000 submissions in an internal competition when Christian Machen submitted the word "Unisys" which was composed of parts of the words united, information and systems.
The merger was the largest in the computer industry at the time and made Unisys the second largest computer company with annual revenue of $10.5 billion. At the time of the merger, Unisys had approximately 120,000 employees. Michael Blumenthal became CEO and Chairman after the merger and resigned in 1990 after several years of losses. James Unruh (formerly of Memorex and Honeywell) became the new CEO and Chairman after Blumenthal's departure and continued in that role until 1997, when Larry Weinbach of Arthur Andersen became the new CEO. By 1997, layoffs and divestitures had reduced world-wide employee count to approximately 30,000.
In addition to hardware, both Burroughs and Sperry had a history of working on U.S. government contracts. Unisys continues to provide hardware, software, and services to various government agencies.
Soon after the merger, the market for proprietary mainframe-class systems—the mainstream product of Unisys and its competitors such as IBM—began a long-term decline that continues, at a lesser rate, today. In response, Unisys made the strategic decision to shift into high-end servers (e.g., 32-bit processor Windows Servers), as well as information technology (IT) services such as systems integration, outsourcing, and related technical services, while holding onto the profitable revenue stream from maintaining its installed base of proprietary mainframe hardware and applications.
Important events in the company's history include the development of the 2200 series in 1986 (including the UNISYS 2200/500 CMOS mainframe), the Micro A, the first desktop mainframe, in 1989, the UNISYS ES7000 servers in 2000, and the 3DVE Unisys blueprinting method of visualizing business rules and workflow in 2004.
In 1988, the company acquired Convergent Technologies, makers of CTOS.
Joseph McGrath served as CEO and President from January 2005 until September 2008.
On October 7, 2008, J. Edward Coleman replaced J. McGrath as CEO and was named Chairman of the board as well.
On November 10, 2008, the company was removed from the S&P 500 index as the market capitalization of the company had fallen below the S&P 500 minimum of $4 billion.
In 2010, Unisys sold its Medicare processing Health Information Management service to Molina Healthcare for $135 million.
On October 6, 2014, Unisys announced that Coleman would leave the company effective December 1, 2014. Unisys' share price immediately fell when this news became public.
On January 1, 2015, Unisys officially named Peter Altabef as its new president and CEO, replacing Edward Coleman. Paul Weaver, who was formerly Lead Independent Director, was named Chairman.
In August 2020, Unisys Corporation reported that for the third straight year, NelsonHall has listed the organization as the regional market sector leader in the Evaluation & Assessment Tool (NEAT) Vendor Analysis report for Advanced Digital Workplace Services.
Products and services
Unisys offers outsourcing and managed services, systems integration and consulting services, high end server technology, cybersecurity and cloud management software, and maintenance and support services.
In line with larger trends in the information technology industry, an increasing amount of Unisys revenue comes from services rather than equipment sales; in 2014, the ratio was 86% for services, up from 65% in 1997. The company maintains a portfolio of over 1,500 U.S. and non-U.S. patents.
The company's mainframe line, Clearpath, is capable of running legacy mainframe software, in addition to the Java platform and the JBoss Java EE Application Server. The Clearpath system is available in either a UNISYS 2200-based system (Sperry) or an MCP-based system (Burroughs).
In 2014, Unisys phased out its CMOS processors, completing the migration of its ClearPath mainframes to Intel x86 chips, allowing clients to run the company's OS 2200 and MCP operating systems alongside more recent Windows and Linux workloads on Intel-based systems that support cloud and virtualization. The company announced its new ClearPath Dorado 8380 and 8390 systems in May 2015. These new systems allowed the company to transition its ClearPath server families from proprietary complementary metal oxide semiconductor processor technology to a software-based fabric architecture running on Intel processors.
Unisys operates data centers around the world.
Clients
Unisys clients are typically large corporations and government agencies, such as the New York Clearinghouse, Dell/EMC, Lufthansa Systems, Lloyds Bank, SWIFT, state governments (e.g., for unemployment insurance, licensing), various branches of the U.S. military, the Federal Aviation Administration (FAA), numerous airports, the General Services Administration, U.S. Transportation Security Administration, Internal Revenue Service, Nextel, and Telefónica of Spain. Unisys systems are used for many industrial and government purposes, including banking, check processing, income tax processing, airline passenger reservations, biometric identification, newspaper content management, and shipping port management, as well as providing weather data services.
Projects
Additional projects include the following:
Consumerization of IT
A study sponsored by Unisys and conducted by IDC revealed the gap between the activities and expectations of the new generation of "iWorkers" and the ability of organizations to support their needs. The results showed that organizations continue to work with standardized command and control IT models of the past and are not able to profit from the widespread use of newer networked technologies.
Security index
A biannual global study that provides statistically relevant insights into the attitudes of consumers on a wide range of security related issues, including:
National security: including concerns related to terrorism and health epidemics
Financial security: regarding financial fraud and ability to meet personal financial obligations
Internet security: related to spam, virus, and online financial transactions
Personal security: concerning physical safety and identity theft
Cloud 20/20
Cloud 20/20 is an annual technical paper contest for tertiary students from India, launched in October 2009. The contest allows students to explore the possibilities and complexities of cloud computing in areas such as automation, virtualization, application development, security, consumerization of IT and airports. The contest has drawn participation from universities across India, with over 570 institutes taking part in 2009 and more than a thousand in 2010. The contest culminates in an event where five finalists present their papers before a panel of judges comprising academicians and technologists. Prizes include the latest technology gadgets, internship projects and career opportunities with Unisys.
Controversies
In 1987, Unisys, a subcontractor responsible for the computer programs for the space shuttle, was sued along with Rockwell Shuttle Operations Company for $5.2 million by two of its former employees. The suit, filed by Sylvia Robins, a former Unisys engineer, and Ria Solomon, who worked for Robins, charged that the two were forced from their jobs and harassed after complaining about safety violations and inflated costs.
Unisys overcharged the U.S. government and in 1998 was found guilty of failure to supply adequate equipment. In 1998, Unisys Corporation agreed to pay the government $2.25 million to settle allegations that it supplied refurbished, rather than new, computer materials to several federal agencies in violation of the terms of its contract. Unisys admitted to supplying re-worked or refurbished computer components to various civilian and military agencies in the early 1990s, when the contract required the company to provide new equipment. The market price for the refurbished material was less than the price for new material which the government paid.
In 1998, Unisys was found guilty of price inflation and government contract fraud, with the company settling to avoid further prosecution. Lockheed Martin and Unisys paid the government $3.15 million to settle allegations that Unisys inflated the prices of spare parts sold to the U.S. Department of Commerce for its NEXRAD Doppler Radar System, in violation of the False Claims Act, 31 U.S.C. § 3729, et seq. "[T]he settlement resolves allegations that Unisys knew that prices it paid Concurrent Computer Corporation for the spare parts were inflated when it passed on those prices to the government. Unisys had obtained price discounts from Concurrent on other items Unisys was purchasing from Concurrent at Unisys' own expense in exchange for agreeing to pay Concurrent the inflated prices". Prior to 1993, Unisys paid Senator Al D'Amato's brother, Armand P. D'Amato for access to the senator. Armand P. D'Amato was convicted for mail fraud in connection with $120,500 he received from Unisys to lobby the Senator.
Unisys attracted attention in 1994 after announcing its patent on the Lempel–Ziv–Welch (LZW) data compression algorithm, which is used in the common GIF image file format. For a more complete discussion of this issue, see Graphics Interchange Format#Unisys and LZW patent enforcement. All global patents that Unisys held relating to the standard GIF format expired as of 7 July 2004, so the Unisys LZW patent issue is no longer an encumbrance to GIF.
Unisys was the target of "Operation Ill Wind", a major corruption investigation in the mid-to-late 1980s. As part of the settlement, all Unisys employees were required to receive ethics training each year, a practice that continues today.
In 2003 and 2004, Unisys retained the influential lobbyist Jack Abramoff, paying his firm $640,000 for his services in those two years. In January 2006, Abramoff pleaded guilty to five felony counts for various crimes related to his federal lobbying activities, though none of his crimes involved work on behalf of Unisys. The lobbying activities of Abramoff and his associates were the subject of a large federal investigation.
In October 2005, the Washington Post reported that the company had allegedly overbilled on the $1-to-3-billion Transportation Security Administration contract for almost 171,000 hours of labor and overtime at up to the maximum rate of $131.13 per hour, including 24,983 hours not allowed by the contract. Unisys denied wrongdoing.
In 2006, the Washington Post reported that the FBI was investigating Unisys for alleged cybersecurity lapses under the company's contract with the United States Department of Homeland Security. A number of security lapses supposedly occurred during the contract, including incidents in which data was transmitted to Chinese servers. Unisys denies all charges and said it has documentation disproving the allegations.
In 2007, Unisys was found guilty of misrepresentation of retiree benefits. A federal judge in Pennsylvania ordered Unisys Corp. to reinstate within 60 days free lifetime retiree medical benefits to 12 former employees who were employed by a Unisys predecessor, the Burroughs Corporation. The judge ruled that Unisys "misrepresented the cost and duration of retiree medical benefits" at a time "trial plaintiffs were making retirement decisions" and while it was advising them about the benefits the company would provide during retirement.
Also in 2007, Unisys was found guilty of willful trademark infringement in Visible Systems v. Unisys (Trademark Infringement). Computer company Visible Systems prevailed over Unisys Corp. in a trademark infringement lawsuit filed in Massachusetts federal court. In November 2007, the court entered an injunction and final judgment ordering Unisys to discontinue its use of the "Visible" trademark, upholding the jury's award to Visible Systems of $250,000 in damages, and awarding an additional $17,555 in interest. Visible Systems claimed Unisys wrongfully used the name "Visible" in marketing its software and services. The jury found the infringement by Unisys was willful. Visible Systems appealed the final judgment, believing the court wrongly excluded the issues of bad faith and disgorgement of an estimated $17 billion in unjust profits from the consideration of the jury.
In 2008, Joe McGrath stepped down after a no confidence vote from the board, and was replaced by J. Edward Coleman, former CEO of Gateway Incorporated. The president of the federal sector, Greg Baroni, was also fired. Unisys announced on June 30, 2008, that the Transportation Security Administration (TSA) had not selected the company for Phase 2 of procurement for the Information Technology Infrastructure Program. In July, Unisys announced its plans to file a formal protest of the TSA decision with the Government Accountability Office (GAO). On August 20, 2008, the TSA announced it was allowing bidding from all competitors including Unisys and Northrop Grumman, who both filed formal protests with the GAO and protested TSA's decision to the Federal Aviation Administration's Office of Dispute Resolution, after not initially being selected.
In 2010, Unisys Hungary terminated the local Workers' Union representative Gabor Pinter's employment contract with immediate effect for raising concerns on the company's practice about the overtime payments and the non-respect of the health regulations in its local Shared Services Center. According to the verdict of the Labour Court of Budapest, Unisys' act was illegal and the Company must reimburse all damages of the Workers' Union representative.
In 2012, Unisys Netherlands censured computer security expert Chris Kubecka for an anti-censorship talk at the Hackers on Planet Earth Number Nine, a conference which focused on highlighting censorship with a talk titled: The Internet is for Porn. Unisys responded to the news story by quoting its policy on public talks by staff.
See also
References
External links
1986 establishments in Pennsylvania
Companies based in Montgomery County, Pennsylvania
Companies listed on the New York Stock Exchange
American companies established in 1986
Consulting firms established in 1986
Computer companies established in 1986
Information technology companies of the United States
Information technology consulting firms of the United States
55888 | https://en.wikipedia.org/wiki/Trusted%20system | Trusted system | In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).
The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.
A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. These also enforce the policies of no read-up and no write-down.
Trusted systems in classified information
A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.
Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is
tamper-proof
always invoked
small enough to be subject to independent testing, the completeness of which can be assured.
According to the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", a set of "evaluation classes" were defined that described the features and assurances that the user could expect from a trusted system.
The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). This is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems in that, the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".
TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3—differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support – even encourage – an inter-mixture of security requirements culled from a variety of predefined "protection profiles." While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised the Bell-LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data such as files, disks, or printers) and subjects (active entities that cause information to flow among objects e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices either "dominates", "is dominated by," or neither.) She defined a generalized notion of "labels" that are attached to entities—corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report—entitled, Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.)
The concepts are unified with two properties, the "simple security property" (a subject can only read from an object that it dominates [is greater than is a close, albeit mathematically imprecise, interpretation]) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down," respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, then the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (sciz., the popularly reported worms and viruses are specializations of the Trojan horse concept).
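To make the two rules concrete, here is a minimal sketch that assumes a simple linear ordering of classification levels; real labels also carry compartments and form a lattice rather than a single chain.

```python
# Minimal sketch of the Bell-LaPadula rules over a linear ordering of levels.
# Real systems use full labels (level plus compartments) ordered as a lattice.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject: str, obj: str) -> bool:
    """Simple security property ("no read-up"): the subject must dominate the object."""
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject: str, obj: str) -> bool:
    """*-property ("no write-down"): the object must dominate the subject."""
    return LEVELS[obj] >= LEVELS[subject]

print(can_read("SECRET", "CONFIDENTIAL"))   # True: reading down is permitted
print(can_write("SECRET", "CONFIDENTIAL"))  # False: writing down is forbidden
```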
The Bell-LaPadula model technically only enforces "confidentiality", or "secrecy", controls; that is, it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (i.e. the problem of the accuracy, or even provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model, "The SeaView Model".
An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and – in a more flexible and powerful form – by Multics since earlier still) and access control lists (ACLs) are familiar examples of DACs.
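As an illustration of discretionary controls, the following minimal sketch inspects the traditional UNIX permission bits of a file using Python's standard library; the file name is a placeholder.

```python
# Minimal sketch: inspect discretionary permission bits on a UNIX-like system.
import os
import stat

st = os.stat("report.txt")           # placeholder file name
mode = stat.S_IMODE(st.st_mode)      # just the permission bits, e.g. 0o644
print(oct(mode))
print("group may write:", bool(mode & stat.S_IWGRP))
print("others may read:", bool(mode & stat.S_IROTH))
```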
The behavior of a trusted system is often characterized in terms of a mathematical model. This may be rigorous depending upon applicable operational and administrative constraints. These take the form of a finite state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification, DTLS (entails a user-perceptible interface such as an API, a set of system calls in UNIX or system exits in mainframes). Each element of the aforementioned engenders one or more model operations.
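A toy version of such a model is sketched below: the system is a finite state machine in which only explicitly listed transitions (the "operations") are permitted; the states and operations chosen here are purely illustrative.

```python
# Minimal sketch: a system modelled as a finite state machine whose only legal
# behaviors are the explicitly enumerated state transitions.
ALLOWED = {
    ("locked",   "authenticate"): "unlocked",
    ("unlocked", "lock"):         "locked",
}

def step(state: str, operation: str) -> str:
    """Reject any operation that is not an allowed transition from this state."""
    try:
        return ALLOWED[(state, operation)]
    except KeyError:
        raise PermissionError(f"operation {operation!r} not permitted in state {state!r}")

print(step("locked", "authenticate"))   # "unlocked"
```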
Trusted systems in trusted computing
The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.
Trusted systems in policy analysis
In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which
probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or for allocating resources against likely threats (including their use in the design of systems constraints to control behavior within the system); or
deviation analysis or systems surveillance is used to ensure that behavior within systems complies with expected or authorized parameters.
The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints.
In this emergent model, "security" is not geared towards policing but to risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.
Trusted systems in information theory
Trusted systems in the context of information theory are based on the following definition:
In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect—as measured by the uncertainty of the party as to what the message will be.
Likewise, trust as defined by Gerck, has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly-variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological—trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist.
Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived—higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.
An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".
The IBM Federal Software Group has suggested that "trust points" provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered to be requisite for achieving the desired collaborative, service-oriented architecture vision.
See also
Accuracy and precision
Computer security
Data quality
Information quality
Trusted Computing
References
External links
Global Information Society Project – a joint research project
Conceptual systems
Security
Computational trust
6017171 | https://en.wikipedia.org/wiki/Automatix%20%28software%29 | Automatix (software) | Automatix is a tool designed to automate the addition of applications, codecs, fonts and libraries not provided directly by the software repositories of Debian-based distributions (specifically Debian, MEPIS and Ubuntu).
These distributions do not include certain packages or configuration settings that allow the playing of DVDs or MP3 files or the viewing of Adobe Flash content, for example. Packages that allow the playing of MP3s are available to download from official sources but cannot be included on the CD. Packages to enable the playing of DVDs include the libdvdcss algorithm. Although adding these manually is possible, it can be time consuming. This is a particular problem for distributions aimed at simplifying the desktop Linux experience.
Design
Automatix allows the menu-driven installation of 56 different "capabilities", including commercial closed source programs such as the Flash plugin, Acrobat Reader, multimedia codecs (DivX, MP3, Windows Media Audio) and fonts, and programming compilers.
Automatix was not recommended by the Ubuntu development team, which has criticised its content. Some individual Ubuntu developers blamed Automatix 1 for breaking updates from Dapper to Edgy. On 2 November 2006 Ubuntu CTO Matt Zimmerman said "I cannot recommend the use of this program, and systems where it has been used cannot be supported with a clean and official upgrade path."
On August 4, 2007, Automatix was reviewed by Matthew Garrett, a member of the core Ubuntu development team. In his words, “Automatix is, in itself, a poor quality package which fails to conform to Debian or Ubuntu policy.” These comments were made in a technical analysis posted on his blog explaining why Automatix is currently not supported by Canonical Ltd. or the Ubuntu community.
Successors
Automatix was discontinued in early 2008.
A new project called Ultamatix, compatible with Ubuntu 9.04, is based on Automatix.
References
External links
Official Website (archived at web.archive.org)
Review
Linux configuration utilities
Linux package management-related software
3350689 | https://en.wikipedia.org/wiki/Anthony%20Davis%20%28running%20back%2C%20born%201952%29 | Anthony Davis (running back, born 1952) | Anthony Davis (born September 8, 1952), also known as A.D., is a former American football running back. He played in four professional leagues: the World Football League (WFL), Canadian Football League (CFL), National Football League (NFL), and United States Football League (USFL).
Davis played college football and baseball at the University of Southern California (USC), where he was part of five national championships, two in football and three in baseball.
College career
Davis was a consensus All-American in 1974, and led the USC Trojans in rushing, scoring and kick return yardage for three consecutive seasons. He is especially remembered for scoring 11 touchdowns in three games against Notre Dame. In a 45–23 USC win on December 2, 1972, he scored six touchdowns which set a school single game record. Two of those scores came on kickoff returns. He returned the opening kickoff 97 yards for a touchdown after Notre Dame won the coin toss and chose to kick. After Notre Dame scored on a short pass and narrowed the Trojans' lead, he returned the following kickoff 96 yards for a touchdown. In this game, Davis had three kickoff returns for a total of 218 yards, an average of 72.7 yards per return. This set an NCAA record for the highest average gain per return in a single game. In his career as a Trojan he returned 37 kickoffs for 1,299 yards, an NCAA record 35.1 yard average. His six career kickoff returns for touchdowns set an NCAA record which stood until 2009, when it was broken by C. J. Spiller of Clemson. Davis' kickoff return average of 42.5 yards in 1974, is the highest kickoff return average for any single season leader ever. He was also the first Pacific-8 Conference player to rush for more than 1,000 yards in three consecutive seasons – 1,191 in 1972; 1,112 in 1973 and 1,469 in 1974. For his career at USC, he carried the ball 784 times for 3,772 yards and 44 touchdowns. Davis was also a repeat (1973, 1974) first team All-Pac-8 Conference selection. He was also the third multiple recipient of the W.J. Voit Memorial Trophy, awarded each year to the outstanding football player on the Pacific Coast. Davis won the Voit trophy in 1972 and 1974.
On November 30, 1974, he started an amazing rally which brought the Trojans back from a 24–0 second quarter deficit against #4 ranked Notre Dame to a 55–24 win. Just before halftime, he scored on a seven-yard lateral pass from quarterback Pat Haden. Davis found paydirt a second time on a 102-yard kickoff return to open the second half. With only 3:25 elapsed in the third quarter, Davis scored a third touchdown on a six-yard run. Then with still 8:37 left in the same quarter, Davis added his fourth and final touchdown of the game on a four-yard dash, dropped to his knees, went into his "endzone dance", then added a two-point conversion and the Trojans had the lead 27-24. Incredibly, Davis had scored 26 of the Trojans' first 27 points.
In 1974, Heisman Trophy ballots were due prior to the end of the season and before that year's USC-Notre Dame game; Davis finished second in the voting to Archie Griffin of Ohio State. In later years, Heisman voting took place only after all regular-season games had been played. From 1972 to 1974, with Davis as the tailback, the Trojans compiled a record, three conference titles, two Rose Bowl victories in three appearances and two national championships. He accumulated 24 school, conference, and NCAA records, including over 5,400 all-purpose yards and 52 touchdowns.
Davis' talents were not just limited to football, he was also successful in baseball as an outfielder and switch-hitter on USC's 1972, 1973, and 1974 College World Series champion baseball teams. Playing with wood bats at the time, Davis hit .273 with six home runs, 45 RBIs and 13 stolen bases for the Trojans' 1974 team.
During his Trojan career, Davis won five national championships – two in football, three in baseball. As a two-sport standout, Davis holds the distinction of being the only player in school history to start for a national champion football team (1972) and a national champion baseball team (1974). He did not finish his degree at USC.
The Notre Dame vs. USC game on November 27, 2004 was titled "Anthony Davis Day", in recognition of the 30th anniversary of the record-breaking game.
While at USC, Davis was on the cover of Sports Illustrated magazine three times, including one foldout. He was inducted into the College Football Hall of Fame in late 2005 in New York City, and enshrined on August 12, 2006, in South Bend, Indiana.
Professional career
The Minnesota Twins selected him in the fourth round (83rd overall pick) of the January 1975 Major League Baseball (MLB) amateur draft; however, he turned them down, believing they would be unable to meet his salary demands.
World Football League
Davis was selected by the New York Jets in the second round of the 1975 NFL Draft, 37th overall. At the time, the Jets had veteran quarterback Joe Namath and offered a major stage, but the team's management were not willing to give in to his contract demands. In 1975, Davis opted to play for the Southern California Sun of the upstart World Football League (WFL); he signed a five-year, $1.7-million deal that reportedly included a $200,000 cash bonus and a Rolls-Royce.
He led the WFL in rushing with 1,200 yards on 239 carries and 16 touchdowns at the time of its demise. He also caught 40 passes for 381 yards and one touchdown, while on kickoff returns he ran back 9 for 235 yards and one touchdown. In all, he scored 18 TDs in the WFL for 133 points. His 16 touchdowns for rushing over 12 games is a WFL record. He also threw the ball and completed four of eleven attempts for 102 yards and a touchdown. The league folded during the season in October, and Davis moved on.
Canadian Football League
Davis headed to the Canadian Football League in 1976, and became the league's first "million dollar man." His time with the Toronto Argonauts was not happy; his star ego clashed with CFL legend and Argos' head coach Russ Jackson's idea of a team player. He rushed 104 times for 417 yards, caught 37 passes for 408 yards, and scored four touchdowns.
During the final regular season game against the Hamilton Tiger-Cats (in Hamilton, Ontario), Argonauts quarterback Matthew Reed, desperate to find an open receiver, threw an incomplete pass to Davis. When Reed returned to the bench, assistant coach Joe Moss told him never to throw the ball to Davis again; he had one carry and called himself the most expensive passing decoy in football.
National Football League
The Tampa Bay Buccaneers had acquired the NFL rights to Davis in the 1976 NFL expansion draft, with his old USC head coach John McKay hoping to turn some new magic, but Davis' NFL career was a disappointment. Tampa Bay had lost all fourteen games in 1976, and injuries to the Bucs' top two quarterbacks in the preseason put extra pressure on the offense. In eleven games for the Bucs in 1977, he rushed 95 times for 297 yards (3.1 yard average), caught eight passes, and scored a touchdown.
In 1978, Davis played two games for the Houston Oilers and two games for the Los Angeles Rams, where he rushed three times for seven yards.
United States Football League
In the spring of 1983 at age thirty, over four years after he last played with the Rams, Davis had a short stint with the Los Angeles Express of the new USFL, rushing twelve times for 32 yards.
After football
Following his football career, Davis found initial success as a real estate developer in the 1980s and early 1990s, while also occasionally acting in minor film and television roles.
In 1990, Davis fulfilled a long-time dream and started playing professional baseball in the short-lived Senior Professional Baseball Association, playing as an outfielder for the San Bernardino Pride club. The Pride had a record of 13-12 and were in third place when the league canceled the season on December 26, less than the halfway point in a planned 56-game schedule.
See also
List of NCAA major college yearly punt and kickoff return leaders
References
External links
1952 births
Living people
African-American players of American football
African-American players of Canadian football
All-American college football players
American football running backs
Baseball outfielders
Baseball players from Texas
Canadian football running backs
College Football Hall of Fame inductees
Houston Oilers players
Los Angeles Express players
Los Angeles Rams players
People from Huntsville, Texas
Players of American football from Texas
San Bernardino Pride players
Sportspeople from Irvine, California
Southern California Sun players
Tampa Bay Buccaneers players
Toronto Argonauts players
USC Trojans baseball players
USC Trojans football players
21st-century African-American people
20th-century African-American sportspeople
11338374 | https://en.wikipedia.org/wiki/PCPaint | PCPaint | PCPaint was the first IBM PC-based mouse-driven GUI paint program. It was developed by John Bridges and Doug Wolfgram. It was later developed into Pictor Paint.
The hardware manufacturer Mouse Systems bundled PCPaint with millions of computer mice that they sold, making PCPaint also the best-selling DOS-based paint program of the late 1980s.
Background
During the dawn of the IBM PC age in 1981, Doug Wolfgram purchased a Microsoft Mouse and decided to write a drawing program for it. The interface was primitive but the program functioned well. In February 1983, Wolfgram traveled to SoftCon in New Orleans, where he demonstrated the program to Mouse Systems. Mouse Systems was developing an optical mouse and wanted to bundle a painting program with it, so they agreed to bundle Mouse Draw. The original program was written entirely in assembly language with primitive graphics routines developed by Wolfgram.
In 1982 John Bridges worked for an educational software company, Classroom Consortia Media, Inc., developing and writing Apple and IBM graphics libraries for CCM's software. Bridges and Wolfgram were friends who had been connected through a bulletin board system developed and run by Wolfgram. The two collaborated cross-country via the BBS, Wolfgram in California and Bridges in New York.
Apple was by this time hard at work on their new computer, Macintosh, and Mouse Systems wanted the new paint program to capture the look and feel of MacPaint. Wolfgram contacted Bridges and the two agreed to develop the commercial version of PCPaint, as it was to be called by Mouse Systems. John Bridges and Doug Wolfgram started reworking Mouse Draw into what became the world's first commercial GUI painting program for the PC. The program was completely re-written using Bridges's graphics library, and the top-level elements were written in C rather than assembly language. Bridges developed the core graphics code for the first version of PCPaint while Wolfgram worked on the user interface and top-level code. Mouse Systems signed an exclusive agreement with Wolfgram's company, Microtex Industries, Inc., to bundle PCPaint with every mouse they sold.
In early 1987, Mouse Systems decided that paint programs were no longer helping to sell mice, so they discontinued the bundle deal and returned rights to the code to MicroTex Industries, but retained rights to the name PCPaint. Wolfgram then combined the paint program with a new animation system he was developing (called GRASP) and Paul Mace Software bought publishing rights to the animation system and PCPaint, which was to be renamed Pictor. Bridges again got involved and took over programming responsibilities for GRASP as well as PCPaint, while Wolfgram focused more on the business details.
In creating the first version of PCPaint, Doug had a dual floppy machine with a Computer Innovations compiler on one disk and source code on the other. John had the "luxury" of a 10MB hard disk in his XT. Data was exchanged daily via 1200, then 2400 baud modems.
Authorship and Ownership
John Bridges and Wolfgram continued to work on PCPaint and GRASP on behalf of Paul Mace Software until 1990. Also in that year, Doug Wolfgram sold his remaining rights to PCPaint (and its animation system, GRASP) to John Bridges.
In 1994, GRASP development stopped and so did development of Pictor Paint. John Bridges terminated his GRASP publishing contract with Paul Mace Software, and went off to create GLPro (the next generation of GRASP) with GMEDIA. Along with GLPro, came GLPaint, the successor to PCPaint and Pictor Paint.
Versions
In June 1984, Mouse Systems shipped PCPaint 1.0, the first GUI based Paint program for the IBM PC family of computers. John Bridges and Doug Wolfgram, were the co-authors of PCPaint 1.0. PCPaint 1.0 saved its graphics in a modified BSaved image format with the extension of ".PIC".
The release of PCPaint Version 1.5 followed in late 1984, with the additions of graphics image compression for the .PIC format and support for "larger-than-screen" images. PCjr support was also added in this version after overcoming severe memory shortage problems getting PCPaint to run on the 128k PCjr.
October 1985 saw the release of PCPaint 2.0. EGA support and publishing features were added to this version. The .PIC format was further refined, offering support for the rapidly expanding graphics capabilities of the PC and efficient image compression.
PCPaint 3.1 was released in 1989. Unlike previous versions, it was not bundled with mice but was sold as a stand-alone software product. PCPaint 3.1 offered improved text and image handling, provided 36 types of flood and fill, worked with VGA adapters in high-resolution 16-color and 256-color modes, allowed the user to save and retrieve files in a variety of intercompatible formats (.PIC, .GIF, .PCX, .IMG), and printed selected portions of images on color or black-and-white dot-matrix, inkjet, and laser printers, including PostScript-compatible and HP LaserJet devices. PCPaint 3.1 remains in use by some users of DOS emulation programs such as DOSBox and is available for free download.
Pictor Paint was an improved version, written by John Bridges, and bundled with GRASP (GRaphical System for Presentation), also written by Bridges. It was also called "The Painter's Easel".
GLPaint, released in 1995, was the last in this series of paint programs written by John Bridges. By 1998, version 7.0 supported TrueColor images, and the Pictor PIC format was expanded to handle them.
Pictor PIC Image Format
PCPaint 1.0 saved its graphics in a modified BSAVE image format (which was popular at the time) with the file type (extension) ".PIC". By PCPaint 1.5, this format had been extended to accommodate image compression. With the release of version 2.0, the Pictor PIC image format was developed almost to its present state, with no similarity to the BSAVE format used by earlier versions.
Pictor Paint saved its files in a compressed format with the file extension PIC, which was the same format used by PCPaint.
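The layout of the opening PIC header fields is described in references such as the Encyclopedia of Graphics File Formats listed below. The following is a minimal Python sketch that reads only those commonly documented opening fields (a 0x1234 magic word followed by 16-bit little-endian width, height, and offsets); the file name is hypothetical, and the remaining header fields (plane information, palette and video-mode data) and the compressed image data are deliberately not parsed.

```python
import struct

def read_pic_header(path):
    """Read the first fields of a Pictor/PCPaint .PIC header (a sketch).

    Only the magic word, dimensions, and offsets are parsed; the remaining
    header fields (plane info, palette marker, video mode, palette data)
    and the compressed image data are skipped.
    """
    with open(path, "rb") as f:
        raw = f.read(10)
    # Five 16-bit little-endian words: magic, width, height, x offset, y offset.
    magic, width, height, x_off, y_off = struct.unpack("<5H", raw)
    if magic != 0x1234:
        raise ValueError("not a Pictor PIC file (missing 0x1234 marker)")
    return {"width": width, "height": height, "x_offset": x_off, "y_offset": y_off}

# Example (hypothetical file name):
# print(read_pic_header("SAMPLE.PIC"))
```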
See also
GLPro
References
Bibliography
Murray, James D., and William Van Ryper. Encyclopedia of Graphics File Formats, 2nd Edition.
The Graphics File Formats Page, GL - Another animation format, Dr. Martin Reddy, Technical Lead, R & D, Pixar Animation Studios
The History of GLPRO, by G-media/IMS, GLPro Mailing List Archive
External links
PC Paint 3.1 | shdon.com - links to a ZIP file named "PCPaint31-Installed.zip"; PC Paint 3.1 can be run in DOSBox after extraction
PCPaint GRASP GLPro History
Doug Wolfgram on PCPaint's beginnings
Info on .PIC and .CLP image formats
"Doug and Melody Wolfgram", by Cynthia Gregory Wilson
Dan's 20th Century Abandonware
GRASP File Format Summary
1984 software
Proprietary software
Graphics software
Raster graphics editors
DOS software |
36197065 | https://en.wikipedia.org/wiki/Smudge%20attack | Smudge attack | A smudge attack is an information extraction attack that discerns the password input of a touchscreen device such as a cell phone or tablet computer from fingerprint smudges. A team of researchers at the University of Pennsylvania were the first to investigate this type of attack in 2010. An attack occurs when an unauthorized user is in possession of or near the device of interest. The attacker relies on detecting the oily smudges produced and left behind by the user's fingers to find the pattern or code needed to access the device and its contents. Simple cameras, lights, fingerprint powder, and image processing software can be used to capture the fingerprint deposits created when the user unlocks their device. Under proper lighting and camera settings, the finger smudges can be easily detected, and the heaviest smudges can be used to infer the most frequent input swipes or taps from the user.
Smudge attacks are particularly successful when performed on devices that offer personal identification numbers (PINs), text-based passwords, and pattern-based passwords as locking options. There are various proposed countermeasures to mitigate attacks, such as biometrics, TinyLock, and SmudgeSafe, all of which are different authentication schemes. Many of these methods provide ways to either cover up the smudges using a stroking method or implement randomized changes so previous logins are different from the current input.
Background
The smudge attack method against smartphone touch screens was first investigated by a team of University of Pennsylvania researchers and reported at the 4th USENIX Workshop on Offensive Technologies. The team classified the attack as a physical side-channel attack where the side-channel is launched from the interactions between a finger and the touchscreen. The research was widely covered in the technical press, including reports on PC Pro, ZDNet, and Engadget. The researchers used the smudges left behind on two Android smartphones and were able to break the password fully 68% of the time and partially 92% of the time under proper conditions.
Once the threat was recognized, Whisper Systems introduced an app in 2011 to mitigate the risk. The app provided its own versions of a pattern lock and PIN authentication that required users to complete certain tasks to cover up the smudges created during the authentication process. For the PIN verification option, the number options were vertically lined up, and users were required to swipe downward over the smudged area. For the pattern lock, the app presented a 10x10 grid of stars the users had to swipe over and highlight before accessing the home screen.
Dangers
Interpreting the smudges on the screen is a relatively easy task for attackers, and the ramifications of an attack can negatively affect the victim. The smudge attack approach could also be applied to other touchscreen devices besides mobile phones that require an unlocking procedure, such as automated teller machines (ATMs), home locking devices, and PIN entry systems in convenience stores. Those who use touchscreen devices or machines that contain or store personal information are at risk of data breaches. The human tendency for minimal and easy-to-remember PINs and patterns also leads to weak passwords, and passwords from weak password subspaces increase the ease with which attackers can decode the smudges.
Smudge attacks are particularly dangerous since fingerprint smudges can be hard to remove from touchscreens, and the persistence of these fingerprints increases the threat of an attack. The attack does not depend on finding perfect smudge prints, and it is still possible for attackers to figure out the password even after cleaning the screen with clothing or with overlapping fingerprints. Cha et al. in their paper, "Boosting the Guessing Attack Performance on Android Lock Patterns with Smudge Attacks," tested an attack method called smug that combined smudge attacks and pure guessing attacks. They found that even after the users were asked to use the Facebook app after unlocking the device, 31.94% of the phones were cracked and accessed.
Another danger of smudge attacks is that the basic equipment needed to perform this attack, a camera and lights, is easily obtainable. Fingerprint kits are also an accessible and additional, but not required, piece of equipment, ranging from $30 to $200. These kits increase the ease with which an attacker can successfully break into a phone in their possession.
Types of attackers
The team at the University of Pennsylvania identified and considered two types of attackers: passive and active.
Active
An active attacker is classified as someone who has the device in hand and is in control of the lighting setup and angles. These attackers can alter the touchscreen in a way to better identify the PIN or pattern code by cleaning or using fingerprint powder. A typical setup from an active attacker could include a mounted camera, the phone placed on a surface, and a single light source. Slight variations in the setup include the type and size of the light source and the distance between the camera and the phone. A more experienced attacker would pay closer attention to the angle of the light and camera, the lighting source, and the type of camera and lens used to get the best picture, taking into account the shadows and highlights when the light reflects.
Passive
A passive attacker is an observer who does not have the device in hand and instead has to perform an eavesdropping-type attack. This means they will wait for the right opportunity to collect the fingerprint images until they can get in possession of the gadget. The passive attacker does not have control of the lighting source, the angle, the position of the phone, and the condition of the touchscreen. They are dependent on the authorized user and their location to get a good quality picture to crack the security code later on.
Methods and techniques
There are different steps and techniques that attackers use to isolate the fingerprint smudges to determine the lock pattern or PIN. The attacker first has to identify the exact touch screen area, any relevant smudges within that area, and any possible combination or pattern segments.
Preprocessing
In cases where the fingerprints are not clearly visible to the eye, preprocessing is used to identify the most intact fingerprints, determined by the number of ridge details they have. Selecting the fingerprints with the most ridge details differentiates the primary user's fingerprints from those of others with whom the device is shared. When a finger is pressed down on the touch screen surface to create a fingerprint, the liquid from the edges of the ridges fills in the contact region. This fingerprint liquid is made up of substances from the epidermis, the secretory glands, and extrinsic contaminants such as dirt or outside skin products. As the fingertip is lifted, the liquid also retracts, leaving behind the leftover traces. Attackers are able to use fingerprint powder to dust over these oil smudges to unveil the visible fingerprint and its ridges. The powder can enhance the diffuse reflection, which reflects from rough surfaces and makes the dusted smudge more visible to the human eye. There are different powders to choose from based on the colors that best contrast with the touchscreen and the environment. Examples of powders are aluminum, bronze, cupric oxide, iron, titanium dioxide, graphite, magnetic, and fluorescent powder. This dusting action also mimics the processes used in a crime scene investigation.
Preserving fingerprints
Preserving fingerprints utilizes a camera to capture multiple pictures of the fingerprint images or the keypad with different light variations. Generally, high-resolution cameras and bright lights work the best for identifying smudges. The goal is to limit any reflections and isolate the clear fingerprints.
Visibility of objects
The visibility of the fingerprint relies on the light source, the reflection, and shadows. The touch screen and surface of a smart device can have different reflections that change how someone views the image of the fingerprint.
Diffuse Reflection : Incident rays that are reflected at many angles and produced from rough surfaces. Diffuse reflection of light reflects the image of the fingerprint that the human eye can see. The techniques used in preprocessing and strong light enhances the diffuse reflection for a clearer photo.
Specular Reflection : Incident rays are reflected at one angle and produced from smooth surfaces. Specular reflection of light reflects a "virtual" image (since it doesn't produce light) that seems to come from behind the surface. An example of this is a mirror.
Mapping fingerprints to keypad
Fingerprint mapping uses the photographed smudge images to figure out which keys were used, by laying the smudge images over the keypad or by comparing the image with a reference picture. Mapping the positions of smudges helps the attacker figure out which keys were tapped by the authorized user. First, the fingerprint and keypad images are resized and processed to find the areas the corresponding fingerprints and keys occupy. Next, the Laplacian edge detection algorithm is applied to detect the edges of the ridges of a finger, sharpen the overall fingerprint, and eliminate any background smudges. The photo is then converted into a binary image to create a contrast between the white fingerprints and the black background. Using this image with grid divisions also helps clarify where the user has tapped, based on the locations with the largest number of white dots in each grid area.
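A minimal sketch of that mapping pipeline, using the open-source OpenCV and NumPy libraries, is shown below; the file name, the binarization threshold, and the 3x4 keypad grid are assumptions chosen for illustration rather than details from a published attack tool.

```python
import cv2
import numpy as np

# Hypothetical photo of the touchscreen, already cropped to the keypad area.
photo = cv2.imread("keypad_smudges.jpg", cv2.IMREAD_GRAYSCALE)

# Sharpen fingerprint ridges with Laplacian edge detection, then binarize so
# smudge pixels appear white on a black background.
edges = cv2.convertScaleAbs(cv2.Laplacian(photo, cv2.CV_64F))
_, binary = cv2.threshold(edges, 40, 255, cv2.THRESH_BINARY)  # threshold is an assumption

# Divide the image into a grid matching the keypad (3 columns x 4 rows for a
# phone-style PIN pad) and count white pixels per cell.
rows, cols = 4, 3
h, w = binary.shape
counts = {}
for r in range(rows):
    for c in range(cols):
        cell = binary[r * h // rows:(r + 1) * h // rows,
                      c * w // cols:(c + 1) * w // cols]
        counts[(r, c)] = int(np.count_nonzero(cell))

# Cells with the most white pixels are the most likely tapped keys.
likely_keys = sorted(counts, key=counts.get, reverse=True)[:4]
print(likely_keys)
```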
Differentiating between multiple fingerprints
In the case that there are multiple users, grouping fingerprints can help classify which ones belong to each person. Fingerprints have both ridges and valleys, and differentiating them is determined by the overall and local ridge structure. There are three patterns of fingerprint ridges (arch, loop, and whorl) that represent the overall structure, and the ridge endings or bifurcations represent the local structure or minutiae points. Different algorithms incorporate these fingerprint traits and structure to group the fingerprints and identify the differences. Some examples of algorithms used are Filterbank, the adjacent orientation vector (AOV) system, and correlation filters.
Filterbank requires whole fingerprints and cannot identify just the tips of the finger since it uses both the local and overall structure. The algorithm works by selecting a region of interest and dividing it into sectors. A feature vector with all the local features is formed after filtering each sector, and the Euclidean distance of the vectors of two fingerprint images can be compared to see if there is a match.
The adjacent orientation vector system matches fingerprints based only on the number of minutiae pairs and the finger details rather than the global/overall structure of the finger. The algorithm works by numbering all of the ridges of the minutiae pairs and creating an AOV consisting of that number and the difference between adjacent minutiae orientations. The AOV score or distance of the two fingerprints is computed and checked against a threshold after fine matching to see if the fingerprints are the same.
The correlation filter approach works with both whole fingers and fingertips. It works by applying a correlation filter, built from training images of the fingerprint, to find the local and overall ridge pattern and ridge frequency. When verifying a fingerprint, the transformation is applied to the test image and multiplied by the result of applying the correlation filter for the person of interest. If the test subject and template match, the result should be large.
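These schemes differ in how features are extracted, but each ends by comparing a distance or similarity score against a threshold. The sketch below illustrates that final decision step with a Euclidean distance between feature vectors, as in the Filterbank description; the example vectors and the threshold value are illustrative assumptions, not values from the published algorithms.

```python
import numpy as np

def is_same_finger(template_vec, candidate_vec, threshold=0.35):
    """Return True if two fingerprint feature vectors are close enough to match.

    The feature vectors would come from a scheme such as Filterbank
    (sector-filtered local features); the threshold is an illustrative value,
    not one taken from the published algorithms.
    """
    distance = np.linalg.norm(np.asarray(template_vec) - np.asarray(candidate_vec))
    return distance <= threshold

# Hypothetical enrolled template and a newly extracted probe vector.
enrolled = np.array([0.12, 0.80, 0.33, 0.45])
probe = np.array([0.15, 0.78, 0.30, 0.47])
print(is_same_finger(enrolled, probe))  # True for these nearby vectors
```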
Smudge-supported pattern guessing (smug)
Smug is a specific attack method that combines image processing with pattern sorting to figure out pattern-based passwords. First, the attackers take a picture of the smudge area using an appropriate camera and lighting. Using an image-matching algorithm, the captured image is then compared to a reference picture of the same device to properly extract a cropped picture focused on the smudges. Next, the smudge objects are identified using binarization, Canny edge detection, and the Hough transform to enhance the visibility of the fingerprint locations. Possible segments between the swipes and points are detected with an algorithm to form the target pattern. The segments are then filtered to remove unwanted and isolated edges and keep only the edges that follow the segment direction. These segments are identified by figuring out whether the smudge between two grid points is part of a pattern, after comparing the number of smudge objects against a set threshold. Lastly, these segments are used in a password model to locate potential passwords (e.g. an n-gram Markov model). An experiment found that this method was successful in unlocking 360 pattern codes 74.17% of the time when assisted by smudge attacks, an improvement from 13.33% for pure guessing attacks.
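A rough Python sketch of the image-processing half of that pipeline is shown below, using OpenCV; the password-model step (for example, the n-gram Markov ranking) is omitted, and the file name, Canny thresholds, Hough parameters, and grid geometry are assumptions for illustration.

```python
import cv2
import numpy as np

# Hypothetical cropped photo of the pattern-lock area of the screen.
img = cv2.imread("pattern_area.jpg", cv2.IMREAD_GRAYSCALE)

# Enhance smudge edges and detect candidate swipe segments.
edges = cv2.Canny(img, 50, 150)                       # thresholds are assumptions
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=10)

# Snap each detected segment's endpoints to the nearest of the nine pattern points.
h, w = img.shape
grid = [(c * w // 2, r * h // 2) for r in range(3) for c in range(3)]

def nearest_point(x, y):
    return min(range(9), key=lambda i: (grid[i][0] - x) ** 2 + (grid[i][1] - y) ** 2)

candidate_edges = set()
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        a, b = nearest_point(x1, y1), nearest_point(x2, y2)
        if a != b:
            candidate_edges.add(tuple(sorted((a, b))))

# candidate_edges would next be filtered and fed to a password model
# (e.g. an n-gram Markov model) to rank likely patterns.
print(candidate_edges)
```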
Types of vulnerable security methods
Smudge attacks can be performed on various smart device locking methods such as Android Patterns, PINs, and text-based passwords. All of these authentication methods require the user to tap the screen to input the correct combination, which leads to susceptibility to smudge attacks that look for these smudges.
Personal Identification Numbers (PINs)
Main Article: Personal Identification Numbers
A PIN is a four- or six-digit code unique to the individual and is one of the most widely used authentication methods for mobile phones, with 78% of mobile phone users utilizing this function. Four-digit PINs are mainly used in English-speaking countries, and six-digit PINs are used by users in Asia. There are only 10 number options to choose from, so four-digit PINs have 10,000 different combinations and six-digit PINs have 1,000,000. PINs are susceptible not only to smudge attacks but also to other attacks possible through direct observation, like shoulder-surfing attacks, or through pure guessing, like brute-force attacks. They are also used heavily in electronic transactions or for using ATMs and other banking situations. If a PIN is shared or stolen, the device or machine cannot detect whether the user is the rightful owner, since it relies only on whether the correct number is entered. In relation to smudge attacks, this allows attackers to easily steal information, since there is no other way to authenticate who the user actually is.
Text-based passwords
Main Article: Passwords
Text-based passwords are a popular type of security measure that people use to lock their phones in an alphanumeric way. Users can use any combination of numbers, uppercase and lowercase letters, punctuation, and special characters to create their passwords. Touchscreen devices that use text-based passwords will contain fingerprint smudges in the location of the corresponding numbers or letters on the alphanumeric keypad. Attackers can use this to perform the smudge attack. The downfall of text-based passwords is not only their vulnerability to smudge attacks but also the tendency of users to forget the password. This causes many users to use something that is easy to remember or to reuse multiple passwords across different platforms. These passwords fall into what is called a weak password subspace within the full password space, which makes it easier for attackers to break in through brute-force dictionary attacks. An early study reviewed 3,289 passwords, and 86% of them had some sort of structural similarity, such as containing dictionary words and being short.
Draw-a-Secret (DAS)
Main Article: Draw-a-Secret
Draw-a-Secret is a graphical authentication scheme that requires the users to draw lines or points on a two-dimensional grid. A successful authentication depends on if the user can exactly replicate the path drawn. Android Pattern Password is a version of Pass-Go that follows the concept of DAS.
Pass-Go
Pass-Go uses a grid so that there is no need to store a graphical database, and it allows the user to draw a password as long as they want. Unlike DAS, the scheme relies on selecting the intersections on a grid instead of the cells on the screen, and users can also draw diagonal lines. Tao and Adams, who proposed this method, found that over their three-month study, many people drew longer pattern passwords, which goes against the tendency to choose minimal and easy-to-remember passwords.
Android Pattern passwords
Android pattern lock is a graphical password method introduced by Google in 2008 in which users create a pattern on a line-connecting 3x3 grid. About 40% of Android users use pattern lock to secure their phones. There are 389,112 possible patterns that the user can draw. Each pattern must contain at least 4 points on the grid, may use each contact point only once, and cannot skip over an intermediate point between two points unless that point has already been used. Touchscreen devices that use Android pattern lock will leave behind swipes that give away the right locations and combination an attacker needs to unlock the phone as an unauthorized user. The security of Android pattern lock against smudge attacks was tested by researchers at the University of Pennsylvania, and from the swipes left behind from the drawn pattern, they were able to discern the code fully 68% of the time and partially 92% of the time under proper conditions.
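The 389,112 figure can be reproduced with a short depth-first enumeration of the rules above (patterns of length 4 to 9, no repeated points, and no skipping over an unvisited intermediate point). The sketch below is a generic reconstruction of those rules rather than code from the cited study.

```python
# Points are numbered 0-8 in row-major order on the 3x3 grid.
# SKIP maps a pair of endpoints to the point lying directly between them.
SKIP = {}
for a, b, mid in [(0, 2, 1), (3, 5, 4), (6, 8, 7),   # horizontal lines
                  (0, 6, 3), (1, 7, 4), (2, 8, 5),   # vertical lines
                  (0, 8, 4), (2, 6, 4)]:             # diagonals
    SKIP[(a, b)] = SKIP[(b, a)] = mid

def count(path, visited):
    total = 1 if len(path) >= 4 else 0   # valid patterns need at least 4 points
    for nxt in range(9):
        if nxt in visited:
            continue                     # each point may be used only once
        mid = SKIP.get((path[-1], nxt))
        if mid is not None and mid not in visited:
            continue                     # cannot skip over an unvisited point
        total += count(path + [nxt], visited | {nxt})
    return total

print(sum(count([s], {s}) for s in range(9)))  # prints 389112
```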
Countermeasures
Physiological biometrics such as Android Face Unlock, iPhone Touch ID and Face ID, and Trusted Voice have recently been implemented in mobile devices as the main or alternative method of validation. There are also other novel approaches that have the potential to become future security schemes but have not yet reached mainstream usage. Some of these approaches avoid the requirement to input anything with the fingers, thereby eliminating the ability of attackers to use smudges to determine the password.
Strong passwords
Although there are many countermeasures that help protect against smudge attacks, creating secure passwords can be the first step to protecting a device. Some of the recommended steps are:
Passwords should be at least 8 characters long. A longer password strays away from the weak password subspace and makes it harder for the attacker to interpret more fingerprint smudges
Avoid using words in the dictionary as they can be more common and make the password weak.
Change passwords frequently.
Use randomly generated passwords. Random passwords prevent a user from selecting commonly used and easy-to-remember words that are easily susceptible to attacks.
Avoid using the same password for every security authentication system. This prevents attackers from accessing other information if they happen to discover one of the passwords.
Although these are the recommended tips for stronger passwords, users can run out of strong password options they will remember and later forget the passcode after frequent changes. To avoid this, users tend to choose short, weaker passwords to make it more convenient and shorten the unlocking time.
Anti-fingerprint protection
Researchers have looked into anti-fingerprint properties that can allow people to keep their current password schemes and not worry about the leftover smudges. Surfaces that are able to repel the water and oils from the finger are called amphiphobic. Surfaces that have low surface energy and surface transparency (low roughness) are typically anti-smudge due to their higher contact angles and low molecular attraction. Low molecular attraction means that there is little to no adhesion for the oil and water molecules to bind to the surface and leave behind a trace. However, achieving these properties while still functioning as a touchscreen is hard as the low surface energy alters the durability and functionality of the touchscreen itself.
With this research, various anti-smudge screen protectors have been put on the market such as Tech Armor's anti-glare and anti-fingerprint film screen protector and ZAGG's InvisibleShield Premium Film and Glass Elite (tempered glass) antimicrobial screen protectors. ZAGG markets its InvisibleShield as smudge resistant, glare resistant, and scratch proof. These phone accessories can range from 30 to 60 dollars.
There have also been various smartphones on the market pitched as having an oleophobic coating, which resists oil to keep the touchscreen free from fingerprints. The oleophobic screen beads up any oil residue, preventing it from sticking to the surface and making it easy to wipe finger residue off without smearing. In July 2016, Blackberry released the DTEK50 smartphone with an oleophobic coating. Other phone developers have used this for the touchscreens of their devices, such as Apple's many generations of iPhones, Nokia Lumia devices, and the HTC Hero.
Biometrics
Main Article: Biometrics
Biometrics is a type of authentication that identifies a user based on their behavior or physical characteristics, such as keystrokes, gait, and facial recognition rather than what one can recall or memorize. A biometrics system takes the unique features from the individual and records them as a biometric template, and the information is compared with the current captured input to authenticate a user. Biometrics is categorized as either physiological or behavioral by the US National Science and Technology Council’s Subcommittee (NSTC) on Biometrics. This type of security can serve as a secondary protection to traditional password methods that are susceptible to smudge attacks on their own since it doesn't rely on entering a memorized number or pattern or recalling an image. Research conducted on biometric authentication found that a mix or hybrid of biometrics and traditional passwords or PINs can improve the security and usability of the original system.
One of the downsides to biometrics is mimicry attacks, in which the attackers mimic the user. These can increase the vulnerability of the device if attackers turn to methods that allow them to copy the victim's behavior. Some of these methods include using a reality-based app that guides attackers when entering the victim's phone or using transparent film with pointers and audio cues to mimic the victim's behavior. Another vulnerability is that the biometric template can be leaked or stolen through hacking or other means and reach unauthorized people. A possible solution to any theft, leak, or mimicry is fingerprint template protection schemes, as they make it difficult for attackers to access the information through encryption and added techniques.
Physiological
Physiological biometrics authenticates a user based on their human characteristics. Measuring the characteristics unique to each individual creates a stable and mostly consistent mechanism to authenticate a person since these features do not change very quickly. Some examples of physiological biometric authentication methods are listed below.
Iris recognition
Fingerprint recognition
Hand geometry
Facial recognition
Behavioral
Behavioral biometrics authenticates a user based on the behavior, habits, and tendencies of the true user. Some examples include voice recognition, gait, hand-waving, and keystroke dynamics. The schemes listed below have been proposed to specifically protect from smudge attacks.
Touch-Interaction: Touch-interaction is a proposed way of authenticating a user based on their interactions with the touch screen, such as tapping or sliding. There are two types: static, which checks the user once, and continuous, which checks the user multiple times. The convenience of this method is that it does not require extra sensors and can check and monitor the user in the background without the help or attention of the user. Chao et al. describe the process in which the up, down, right, and left motions are checked in terms of the position of the finger, the length of the swipe, the angle, the time it takes, the velocity, acceleration, and finger pressure. In their experiment, they tested how usable and reliable the touch-based method is and found that all of the touch operations were stable and blocked unauthorized users with an expected error rate of 1.8%. However, there are still other factors, like the smartphone type, the software, the environment, familiarity with the phone, and the physical state of the user, that could create variability and thus a higher rate of error.
BEAT : This specific unlocking method is called BEAT, which authenticates the behavior of the user or how they perform a gesture or signature. A gesture is swiping or pinching the touch screen, and a signature scheme requires the user to sign their name. This method is secure from smudge attacks and also does not need extra hardware. BEAT works by first asking the user to perform the action 15 to 20 times to create a model based on how they performed the action to use for authentication. The features identified are velocity magnitude, device acceleration, stroke time, inter-stroke time, stroke displacement magnitude, stroke displacement direction, and velocity direction. Machine learning techniques are then applied to determine whether the user is legitimate or not. An experiment was conducted using the BEAT method on Samsung smartphones and tablets and found that after collecting 15,009 gesture samples and 10,054 signature samples, the error rate of 3 gestures is 0.5% and about 0.52% for one signature.
SmudgeSafe
SmudgeSafe is another authentication method protected from smudge attacks; it uses 2-dimensional image transformations to rotate, flip, or scale the image on the login screen. The user draws a graphical password shape created from points on an image as usual, but the image will look different every time the user logs in. The changes made to the image are randomized, so previous login smudges do not give attackers hints about the input. To ensure that the transformations applied will significantly change the locations of the password points, the area of these specific locations on the image is restricted. In a study comparing SmudgeSafe's graphical authentication method to lock patterns and PINs, SmudgeSafe performed the best, with a mean of 0.51 passwords guessed per participant. The pattern lock had a mean of 3.50 and PINs had a mean of 1.10 passwords correctly guessed per participant.
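A minimal sketch of the idea follows: the stored password points are mapped through a random 2-dimensional transformation at each login, so the on-screen locations, and therefore the smudges, differ between sessions. The set of transformations, the coordinates, and the tolerance radius are assumptions for illustration, not details of the published SmudgeSafe implementation.

```python
import math
import random

def random_transform(cx, cy):
    """Return a function mapping image points through a random rotation,
    optional horizontal flip, and scaling about the image center (cx, cy)."""
    angle = random.uniform(0, 2 * math.pi)
    scale = random.uniform(0.8, 1.2)
    flip = random.choice([1, -1])
    cos_a, sin_a = math.cos(angle), math.sin(angle)

    def apply(pt):
        x, y = pt[0] - cx, pt[1] - cy
        x *= flip
        xr = scale * (x * cos_a - y * sin_a) + cx
        yr = scale * (x * sin_a + y * cos_a) + cy
        return xr, yr
    return apply

# Hypothetical enrolled password points (pixel coordinates on the image).
password_points = [(120, 340), (200, 150), (410, 280)]

transform = random_transform(cx=240, cy=320)          # new transform each login
displayed_points = [transform(p) for p in password_points]

def authenticate(clicks, targets, tolerance=25):
    """Accept the login if every click lands near its transformed target point."""
    return all(math.dist(c, t) <= tolerance for c, t in zip(clicks, targets))
```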
TinyLock
TinyLock was proposed by Kwon et al. and uses two grids; the top one holds the pressed cells for the confirmation process, and the bottom one is a drawing pad for the authentication process. The top grid is used to notify the user, by flickering and vibrating, whether they are on the correct initial dot before they start drawing. The bottom half of the screen contains a tiny 3 x 3 grid used for drawing the secret password. The grid is much smaller than in traditional pattern locks, which forces the user to draw in a confined space and squeeze all the smudges into a small area. This method mitigates smudge attacks because the smudges are all smeared together, and the users are required to draw a circular virtual wheel in either direction after drawing the pattern password. However, this method is not completely free from shoulder-surfing attacks. Another drawback is that the grid dots are hard to see due to their small size, which makes it difficult to draw complex patterns and unlock without error.
ClickPattern
ClickPattern uses a 3 x 3 grid labeled one through nine, and the user has to click on the nodes that correspond to the ends of a drawn line rather than swiping on the screen. Doing this creates smudges that are harder to distinguish from normal screen usage. At most, the smudges created will reveal the nodes used but not the pattern, making the scheme more resistant to smudge attacks than the Android pattern lock. On the lock screen, ClickPattern consists of these three components:
Grid 3 x 3
Table numbered 1 to 9
Okay and Undo buttons
The user is authenticated when the inputted pattern is the same as the original pattern, in the same exact order and direction. To create a valid pattern, the pattern must have at least 4 points, and none of them can be used more than once. The pattern will also always include the dots lying between points in a sequence, even though they do not necessarily need to be clicked. Users can also pass through previously used dots to reach an unused node.
Multi-touch authentication with Touch with Fingers Straight and Together (TFST)
This multi-touch authentication uses geometric and behavioral characteristics to verify users on a touch screen device. According to Song et al., this TFST gesture takes an average of 0.75 seconds to unlock, is very easy to use, and simple to follow. The user puts two to four fingers together in a straight position, decreasing the amount of surface compared to other multi-touch methods. With the fingers in this fixed hand posture, the user can choose to either trace a simple or complex pattern, and the screen will pick up the positions of the fingers and record each trace movement in the form of touch events. These touch events account for the X and Y-coordinates, the amount of pressure applied, the finger size, the timestamp, and the size of the touched area, and are compared to the template created during the registration process. The physiological features or hand geometry include a measurement between possible strokes from the performed gesture. Horizontal strokes track the finger length differences, and vertical strokes track the finger width. Since the user always places their fingers in a straight position, the measurements of the finger will stay the same and provide consistent verification. Lastly, there are behavioral features that are traced, specifically the length of the stroke, the time it takes, the velocity of the stroke, the tool or the area for each touch point in relation to finger size, the touch area size, the pressure applied, and the angle of the stroke. For one stroke, there are 13 behavioral features, and this increases to 26, 39, and 52 for up to four strokes.
Bend passwords
With new technology geared towards creating flexible displays for smartphone devices, there are more opportunities to create novel authentication methods. Bend passwords are an original type of password authentication used for flexible screens. They involve different bend gestures that users perform by twisting or deforming the display surface, and there are a total of 20 gestures currently available. The bending can be part of a single gesture, made by individually bending one of the four corners of the display, or part of a multi-bend gesture, made by simultaneously bending pairs of corners.
Fractal-Based Authentication Technique (FBAT)
A newly proposed authentication method called the Fractal-Based Authentication Technique (FBAT) uses the Sierpinski triangle to authenticate users. This process combines recognition-based and cued recall-based authentication, as users have to recognize and click on their personal pre-selected color triangles as the level of triangles increases. For smartphones, the level of triangles is set at 3 due to the limited size of the touch screen, but it can increase for bigger tablets. At level 3, the probability that an attacker will guess the password is 0.13%. Recognition-based authentication requires users to recognize pre-selected images, and cued recall-based graphical authentication requires users to click on pre-selected points on an image. In the Sierpinski triangle, a selected colored pattern is created during registration and is hidden in the device. To authenticate themselves, a user must select the correct pattern in each level while the triangles randomly shuffle. Since the colored triangles are randomly generated, they can appear in different locations for every authentication, thus leaving behind smudges that do not give any clues to potential attackers. This technique can be used on Android devices, ATMs, laptops, or any device that uses authentication to unlock.
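The quoted guessing probability is consistent with a model in which one triangle is chosen per level and the number of selectable triangles triples at each level (3, 9, and 27 at levels 1 through 3); this selection model is an assumption used only for the quick check below.

```python
# Assumed selection model: one triangle per level, with 3, 9, and 27
# triangles available at levels 1, 2, and 3 of the Sierpinski subdivision.
combinations = 3 * 9 * 27            # 729 possible three-level selections
guess_probability = 1 / combinations
print(f"{guess_probability:.2%}")    # roughly 0.14%, in line with the quoted 0.13%
```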
2 x 2 and 1 x 2 Knock Code
Knock Code is an authentication method introduced by LG Electronics that allows users to unlock a phone without turning it on by tapping the correct areas in the right sequence. The screen is split into four sections, with the positions of the dividing vertical and horizontal lines varying. There are two variations of Knock Code that have been proposed: the 2 x 2 and the 1 x 2 knock code. These variations can protect against smudge attacks due to the sliding operations that erase the knocking smudges after the taps are inputted. In a user study that compared the original Knock Code and the Android Pattern Lock, these variation schemes were more resistant to smudge attacks.
2 x 2 knock code: The 2 x 2 knock code adds a sliding gesture, which increases the number of possible password combinations to about 4.5 billion, roughly 53 thousand times more than the original Knock Code. This scheme uses four parts of the grid and aims to decrease the number of gestures performed while still maintaining a high level of security.
1 x 2 knock code: The 1 x 2 scheme also uses sliding operations but reduces the number of areas to two, placed side by side. Flexible area recognition, the algorithm used, does not allow sliding operations in the same area, for convenience, and the user only has to use their thumb to unlock the phone. The number of passwords in the subspace is exactly the same as in the original Knock Code.
Future
There has been movement towards physiological biometric authentication in current smartphone security, such as fingerprint and facial recognition, which allows users to replace their PINs and alphanumeric passcodes. However, even new and advanced authentication methods have flaws and weaknesses that attackers can take advantage of. For example, in an examination of touch authentication, researchers observed similar swiping behavior and finger pressure in a large number of phone users, and this generic information can aid attackers in performing successful attacks. Research on biometrics and multi-gesture authentication methods is continuing, to help combat attacks on traditional passwords and eliminate the vulnerabilities of novel schemes as new trends and new technology are developed.
See also
Biometric Points
Keystroke dynamics
Lock screen
Password Strength
Mobile Security
Shoulder-surfing
Lipophobicity
References
Computer security exploits |
34658831 | https://en.wikipedia.org/wiki/Titan%20%28supercomputer%29 | Titan (supercomputer) | Titan or OLCF-3 was a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan was an upgrade of Jaguar, a previous supercomputer at Oak Ridge, and used graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan was the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012, and the machine became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.
Titan was eclipsed at Oak Ridge by Summit in 2019, which was built by IBM and features fewer nodes with much greater GPU capability per node as well as local per-node non-volatile caching of file data from the system's parallel file system.
Titan employed AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order of magnitude increase in computational power over Jaguar. It used 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaFLOPS; in the LINPACK benchmark used to rank supercomputers' speed, it performed at 17.59 petaFLOPS. This was enough to take first place in the November 2012 list by the TOP500 organization, but Tianhe-2 overtook it on the June 2013 list.
Titan was available for any scientific purpose; access depended on the importance of the project and its potential to exploit the hybrid architecture. Any selected programs also had to be executable on other supercomputers to avoid sole dependence on Titan. Six vanguard programs were the first selected. They dealt mostly with molecular-scale physics or climate models, while 25 others were queued behind them. The inclusion of GPUs compelled authors to alter their programs. The modifications typically increased the degree of parallelism, given that GPUs offer many more simultaneous threads than CPUs. The changes often yielded greater performance even on CPU-only machines.
History
Plans to create a supercomputer capable of 20 petaFLOPS at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) originated as far back as 2005, when Jaguar was built. Titan was itself to be replaced by an approximately 200 petaFLOPS system in 2016 as part of ORNL's plan to operate an exascale (1000 petaFLOPS, or 1 exaFLOPS) machine by 2020. The initial plan to build a new 15,000 square meter (160,000 ft2) building for Titan was discarded in favor of using Jaguar's existing infrastructure. The precise system architecture was not finalized until 2010, although a deal with Nvidia to supply the GPUs was signed in 2009. Titan was first announced at the private ACM/IEEE Supercomputing Conference (SC10) on November 16, 2010, and was publicly announced on October 11, 2011, as the first phase of the Titan upgrade began.
Jaguar had received various upgrades since its creation. It began with the Cray XT3 platform that yielded 25 teraFLOPS. By 2008, Jaguar had been expanded with more cabinets and upgraded to the XT4 platform, reaching 263 teraFLOPS. In 2009, it was upgraded to the XT5 platform, hitting 1.4 petaFLOPS. Its final upgrades brought Jaguar to 1.76 petaFLOPS.
Titan was funded primarily by the US Department of Energy through ORNL. Funding was sufficient to purchase the CPUs but not all of the GPUs so the National Oceanic and Atmospheric Administration agreed to fund the remaining nodes in return for computing time. ORNL scientific computing chief Jeff Nichols noted that Titan cost approximately $60 million upfront, of which the NOAA contribution was less than $10 million, but precise figures were covered by non-disclosure agreements. The full term of the contract with Cray included $97 million, excluding potential upgrades.
The yearlong conversion began October 9, 2011. Between October and December, 96 of Jaguar's 200 cabinets, each containing 24 XT5 blades (two 6-core CPUs per node, four nodes per blade), were upgraded to XK7 blades (one 16-core CPU per node, four nodes per blade) while the remainder of the machine remained in use. In December, computation was moved to the 96 XK7 cabinets while the remaining 104 cabinets were upgraded to XK7 blades. ORNL's external ESnet connection was upgraded from 10 Gbit/s to 100 Gbit/s and the system interconnect (the network over which CPUs communicate with each other) was updated. The SeaStar design used in Jaguar was upgraded to the Gemini interconnect used in Titan, which connects the nodes in a direct 3D torus network. Gemini uses wormhole flow control internally. The system memory was doubled to 584 TiB. 960 of the XK7 nodes (10 cabinets) were fitted with a Fermi-based GPU, as Kepler GPUs were not then available; these 960 nodes were referred to as TitanDev and used to test code. This first phase of the upgrade increased the peak performance of Jaguar to 3.3 petaFLOPS. Beginning on September 13, 2012, Nvidia K20X GPUs were fitted to all of Jaguar's XK7 compute blades, including the 960 TitanDev nodes. In October, the task was completed and the computer was finally renamed Titan.
In March 2013, Nvidia launched the GTX Titan, a consumer graphics card that uses the same GPU die as the K20X GPUs in Titan. Titan underwent acceptance testing in early 2013 but only completed 92% of the tests, short of the required 95%. The problem was discovered to be excess gold in the female edge connectors of the motherboards' PCIe slots causing cracks in the motherboards' solder. The cost of repair was borne by Cray and between 12 and 16 cabinets were repaired each week. Throughout the repairs users were given access to the available CPUs. On March 11, they gained access to 8,972 GPUs. ORNL announced on April 8 that the repairs were complete and acceptance test completion was announced on June 11, 2013.
Titan's hardware has a theoretical peak performance of 27 petaFLOPS with "perfect" software. On November 12, 2012, the TOP500 organization, which ranks the world's supercomputers by LINPACK performance, ranked Titan first at 17.59 petaFLOPS, displacing IBM Sequoia. Titan also ranked third on the Green500, the same 500 supercomputers ranked in terms of energy efficiency. In the June 2013 TOP500 ranking, Titan fell to second place behind Tianhe-2 and to twenty-ninth on the Green500 list. Titan was not re-tested for the June 2013 ranking, because even at its theoretical peak of 27 petaFLOPS it would still have ranked second.
Hardware
Titan uses Jaguar's 200 cabinets, covering 404 square meters (4,352 ft2), with replaced internals and upgraded networking. Reusing Jaguar's power and cooling systems saved approximately $20 million. Power is provided to each cabinet at three-phase 480 V. This requires thinner cables than the US standard 208 V, saving $1 million in copper. At its peak, Titan draws 8.2 MW, 1.2 MW more than Jaguar, but runs almost ten times as fast in terms of floating point calculations. In the event of a power failure, carbon fiber flywheel power storage can keep the networking and storage infrastructure running for up to 16 seconds. After 2 seconds without power, diesel generators fire up, taking approximately 7 seconds to reach full power. They can provide power indefinitely. The generators are designed only to keep the networking and storage components powered so that a reboot is much quicker; the generators are not capable of powering the processing infrastructure.
Titan has 18,688 nodes (4 nodes per blade, 24 blades per cabinet), each containing a 16-core AMD Opteron 6274 CPU with 32 GB of DDR3 ECC memory and an Nvidia Tesla K20X GPU with 6 GB GDDR5 ECC memory. There are a total of 299,008 processor cores, and a total of 693.6 TiB of CPU and GPU RAM.
Initially, Titan used Jaguar's 10 PB of Lustre storage with a transfer speed of 240 GB/s, but in April 2013, the storage was upgraded to 40 PB with a transfer rate of 1.4 TB/s. GPUs were selected for their vastly higher parallel processing efficiency over CPUs. Although the GPUs have a slower clock speed than the CPUs, each GPU contains 2,688 CUDA cores at 732 MHz, resulting in a faster overall system. Consequently, the CPUs' cores are used to allocate tasks to the GPUs rather than directly processing the data as in conventional supercomputers.
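The 27 petaFLOPS theoretical peak can be approximated from the node count above. The per-device figures used below (a 1.31 teraFLOPS double-precision peak for the Tesla K20X, and a 2.2 GHz clock with 8 double-precision FLOPs per cycle per Bulldozer module for the Opteron 6274) are commonly published specifications rather than numbers stated in this article, so the calculation should be read as an estimate.

```python
nodes = 18_688

# Assumed per-device double-precision peaks (published specs, not from this article).
k20x_tflops = 1.31                       # Tesla K20X peak, TFLOPS
opteron_modules = 8                      # 16 cores = 8 Bulldozer modules
opteron_ghz = 2.2
opteron_flops_per_cycle = 8              # per module, double precision
opteron_tflops = opteron_modules * opteron_ghz * opteron_flops_per_cycle / 1000

gpu_pflops = nodes * k20x_tflops / 1000
cpu_pflops = nodes * opteron_tflops / 1000
print(f"GPU ~{gpu_pflops:.1f} PF, CPU ~{cpu_pflops:.1f} PF, total ~{gpu_pflops + cpu_pflops:.1f} PF")
# roughly 24.5 + 2.6, or about 27 petaFLOPS in total
```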
Titan runs the Cray Linux Environment, a full version of Linux on the login nodes that users directly access, but a smaller, more efficient version on the compute nodes.
Titan's components are air-cooled by heat sinks, but the air is chilled before being pumped through the cabinets. Fan noise is so loud that hearing protection is required for people spending more than 15 minutes in the machine room. The system has a cooling capacity of 23.2 MW (6600 tons) and works by chilling water to 5.5 °C (42 °F), which in turn cools recirculated air.
Researchers also have access to EVEREST (Exploratory Visualization Environment for Research and Technology) to better understand the data that Titan outputs. EVEREST is a visualization room with a 10 by 3 meter (33 by 10 ft) screen and a smaller, secondary screen. The screens are 37 and 33 megapixels respectively with stereoscopic 3D capability.
Projects
In 2009, the Oak Ridge Leadership Computing Facility that manages Titan narrowed the fifty applications for first use of the supercomputer down to six "vanguard" codes chosen for the importance of the research and for their ability to fully utilize the system. The six vanguard projects to use Titan were:
S3D, a project that models the molecular physics of combustion, aims to improve the efficiency of diesel and biofuel engines. In 2009, using Jaguar, it produced the first fully resolved simulation of autoigniting hydrocarbon flames relevant to the efficiency of direct injection diesel engines.
WL-LSMS simulates the interactions between electrons and atoms in magnetic materials at temperatures other than absolute zero. An earlier version of the code was the first to perform at greater than one petaFLOPS on Jaguar.
Denovo simulates nuclear reactions with the aim of improving the efficiency and reducing the waste of nuclear reactors. The performance of Denovo on conventional CPU-based machines doubled after the tweaks for Titan and it performs 3.5 times faster on Titan than it did on Jaguar.
Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics code that simulates particles across a range of scales, from quantum to relativistic, to improve materials science with potential applications in semiconductor, biomolecule and polymer development.
CAM-SE is a combination of two codes: Community Atmosphere Model, a global atmosphere model, and High Order Method Modeling Environment, a code that solves fluid and thermodynamic equations. CAM-SE will allow greater accuracy in climate simulations.
Non-Equilibrium Radiation Diffusion (NRDF) plots non-charged particles through supernovae with potential applications in laser fusion, fluid dynamics, medical imaging, nuclear reactors, energy storage and combustion. Its Chimera code uses hundreds of partial differential equations to track the energy, angle, angle of scatter and type of each neutrino modeled in a star going supernova, resulting in millions of individual equations. The code was named Chimera after the mythological creature because it has three "heads": the first simulates the hydrodynamics of stellar material, the second simulates radiation transport and the third simulates nuclear burning.
Bonsai is a gravitational tree code for n-body simulation. It was used for the 2014 Gordon Bell prize nomination for simulating the Milky Way Galaxy on a star-by-star basis, with 200 billion stars. In this application the computer reached a sustained speed of 24.773 petaFLOPS.
VERA is a light-water reactor simulation written at the Consortium for Advanced Simulation of Light Water Reactors (CASL) on Jaguar. VERA allows engineers to monitor the performance and status of any part of a reactor core throughout the lifetime of the reactor to identify points of interest. Although not one of the first six projects, VERA was planned to run on Titan after optimization with assistance from CAAR and testing on TitanDev. Computer scientist Tom Evans found that the adaption to Titan's hybrid architecture was more difficult than to previous CPU-based supercomputers. He aimed to simulate an entire reactor fuel cycle, an eighteen to thirty-six month-long process, in one week on Titan.
In 2013 thirty-one codes were planned to run on Titan, typically four or five at any one time.
Code modifications
The code of many projects has to be modified to suit the GPU processing of Titan, but each code is required to be executable on CPU-based systems so that projects do not become solely dependent on Titan. OLCF formed the Center for Accelerated Application Readiness (CAAR) to aid with the adaptation process. It holds developer workshops at Nvidia headquarters to educate users about the architecture, compilers and applications on Titan. CAAR has been working on compilers with Nvidia and code vendors to integrate directives for GPUs into their programming languages. Researchers can thus express parallelism in their code with their existing programming language, typically Fortran, C or C++, and the compiler can express it to the GPUs. Dr. Bronson Messer, a computational astrophysicist, said of the task: "an application using Titan to the utmost must also find a way to keep the GPU busy, remembering all the while that the GPU is fast, but less flexible than the CPU." Moab Cluster Suite is used to prioritize jobs to nodes to keep utilization high; it improved efficiency from 70% to approximately 95% in the tested software. Some projects found that the changes increased efficiency of their code on non-GPU machines; the performance of Denovo doubled on CPU-based machines.
The amount of code alteration required to run on the GPUs varies by project. According to Dr. Messer of NRDF, only a small percentage of his code runs on GPUs because the calculations are relatively simple but processed repeatedly and in parallel. NRDF is written in CUDA Fortran, a version of Fortran with CUDA extensions for the GPUs. Chimera's third "head" was the first to run on the GPUs as the nuclear burning could most easily be simulated by GPU architecture. Other aspects of the code were planned to be modified in time. On Jaguar, the project modeled 14 or 15 nuclear species but Messer anticipated simulating up to 200 species, allowing far greater precision when comparing the simulation to empirical observation.
See also
Jaguar (supercomputer) – OLCF-2
Summit (supercomputer) – OLCF-4
Oak Ridge Leadership Computing Facility
References
External links
Cray
GPGPU supercomputers
Nvidia
Oak Ridge National Laboratory
One-of-a-kind computers
Petascale computers
X86 supercomputers
64-bit computers |
24422075 | https://en.wikipedia.org/wiki/AnyDoc%20Software | AnyDoc Software | AnyDoc Software, founded in 1989 as Microsystems Technology, Inc., was a company based in Tampa, Florida that developed, sold, installed, and supported enterprise content management (ECM) software which captures data from scanned documents or images into machine-readable text (and images) for back-office applications and content/document management systems. The company's flagship product, OCR for Forms (which was later renamed OCR for AnyDoc), debuted in 1991 after two years of product research and development. AnyDoc Software was purchased in 2013 by Hyland Software, which is best known for its document management and content services software, OnBase. AnyDoc users can find more information about their products on the AnyDoc Community Page.
AnyDoc developed technologies to process structured, semi-structured, and unstructured (free-form) documents, as well as classification and workflow. Structured documents, where data appears in the same location on each form (such as a credit application or order form), use template-based technology. A template, in essence, is a map telling the software where the data is located on the document and how to process that data. While template-based data capture is still widely used to eliminate the manual data entry previously required to process structured documents, it is not a feasible solution for efficiently processing semi-structured documents, such as invoices, remittances, and checks. AnyDoc developed its AnyApp technology to capture data from these more complex documents and to memorize the data locations for subsequent encounters with the same document types, allowing expedited processing.
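A minimal sketch of the template idea for structured forms is shown below, using the open-source pytesseract OCR wrapper purely as a stand-in; AnyDoc's own OCR engine and APIs are proprietary and are not represented here, and the field names and pixel coordinates are hypothetical.

```python
from PIL import Image
import pytesseract

# A "template" maps each field of a structured form to the pixel region where
# its data appears. Coordinates are hypothetical (left, top, right, bottom).
TEMPLATE = {
    "applicant_name": (150, 220, 600, 260),
    "account_number": (150, 300, 400, 340),
    "date":           (620, 80, 780, 120),
}

def extract_fields(image_path, template):
    """Crop each templated region from a scanned form and OCR it."""
    page = Image.open(image_path)
    return {field: pytesseract.image_to_string(page.crop(box)).strip()
            for field, box in template.items()}

# Example (hypothetical file name):
# print(extract_fields("credit_application.png", TEMPLATE))
```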
History
1989: Company founded.
1991: Flagship product, OCR for Forms (now known as OCR for AnyDoc) introduced.
1999: European headquarters opened in Zug, Switzerland.
2001: Semi-structured forms processing technology AnyApp introduced.
2003: Company name changed to AnyDoc Software and rebranding of products.
2006: Opened UK satellite office in Hampshire, United Kingdom.
2007: Introduction of capture workflow product, Infiniworx to auto-classify documents in a company’s workflow.
2008: Opened German office in Wiesbaden, Germany.
2013: Acquired by Hyland Software, Inc.
References
External links
Software companies based in Florida
Companies based in Tampa, Florida
1989 establishments in Florida
Software companies of the United States |
1043712 | https://en.wikipedia.org/wiki/Capgemini%20Engineering | Capgemini Engineering | Capgemini Engineering (previously known as Altran Technologies, SA) is a global innovation and engineering consulting firm founded in 1982 in France by Alexis Kniazeff and Hubert Martigny. Altran operates primarily in high technology and innovation consultancy, which account for nearly 75% of its turnover. Administrative and information consultancy accounts for 20% of its turnover, with strategy and management consulting making up the rest. The firm is active in most engineering domains, particularly electronics and IT technology. In 2018, Altran generated €2.916 billion in revenues and employed over 46,693 people around the world. Since 18 June 2015, Altran has been led by CEO Dominique Cerutti. Altran was acquired by Capgemini in 2019. On 8 April 2021, the organization was renamed "Capgemini Engineering" following the merger with Capgemini's Engineering and R&D services.
History
1980s
In 1982, Alexis Kniazeff and Hubert Martigny, ex-consultants of Peat Marwick (today known as KPMG), founded CGS Informatique, which would later become Altran. By 1985, the firm counted a staff of 50 engineers.
The company expanded through small business units that would later generally range from 10 to 200 employees. Business units operated semi-independently and were given the autonomy to choose their own growth strategy and investment programs while still getting assistance from central management. This allowed business units to give each other support and share ideas. Managers’ compensation was decided based on the units’ performance.
One of Altran's first major projects was developing the on-board communications network in 1987 for France's high-speed TGV trains that allowed French lines to be connected to other European rail lines.
In 1987, the company was listed on the Secondary Market of the Paris Stock Exchange. By 1989, Altran's sales had neared the equivalent of 48 million euros. That same year, Altran bought Ségur Informatique, an aeronautics simulation and modeling company. The number of the company's employees grew to approximately 1,000 by 1990, as well as its range of expertise, moving into the transportation, telecommunications, and energy sectors, with a strong information technology component.
1990s
In the early 1990s the company adopted a new business model. While much of the company's work during the previous decade had been performed in-house, at the beginning of the 1990s the company developed a new operational concept, that of a temp agency for the high-technology sector. The firm's staff started to work directly with its clients' projects, adding their specialized expertise to projects. By the end of the decade, the company had more than 50 subsidiaries in France, and had taken the lead of that market's technology consulting sector. The company was helped by the long-lasting recession affecting France and much of Europe at the beginning of the decade, as companies began outsourcing parts of their research and development operations. Altran was also expanding by acquisition, buying up a number of similar consultancies in France, such as the 1992 acquisition of GERPI, based in Rennes. By the end of that year, Altran's revenues had reached 76.5 million euros.
With the elimination of border controls within the European Community in 1992, the company's clients began operations in other European countries. At first Altran turned to foreign partnerships in order to accommodate its clients. Yet this approach quickly proved unsatisfactory, and Altran put into place an aggressive acquisition plan in order to establish its own foreign operations.
Altran targeted the Benelux countries, the first to lower their trade barriers, acquiring a Belgian company in 1992. By the end of the decade, the firm's network in these countries' markets was composed of 12 companies and 1,000 consultants. When an acquisition took place, Altran kept on existing management and in general the acquired firms retained their names. The acquisition policy was based on paying an initial fee for an acquisition, then on subsequent annual payments based on the acquired unit's performance.
In 1992, Altran created Altran Conseil to work in the automobile equipment, nuclear and consumer electronic industries.
Altran's Spanish operations began with the acquisition in 1993 of SDB Espan, a leading telecommunications consultant in that country, and later grew into a group of nine companies with more than 2,000 consultants. Spain remained one of the company's top three markets into the new century, with a total of six companies operating under Altran, including new acquisitions Norma, STE, Inser, and Strategy Consultors.
By 1995, Altran's sales had topped 155 million euros, and its total number of employees had grown to nearly 2,400 (mostly engineers). The company recognized that the majority of engineers lacked a background in management, so a training program called IMA (Institut pour le management Altran), capable of training 200 candidates per year, was launched.
In 1995 the company invested in the United Kingdom and acquired High Integrity Systems, a consulting firm focused on assisting companies that were transitioning into new-generation computer and network systems, and DCE Consultants, which operated from offices in Oxford and Manchester.
In 1997, Altran also acquired Praxis Critical Systems, founded in Bath in 1983 to provide software and safety-engineering services. In order to supplement the activities of its acquisitions, the company also opened new subsidiary offices, such as Altran Technologies UK, a multi-disciplinary and cross-industry engineering consultancy.
In the second half of the 1990s the company was acquiring an average of 15 companies per year. Italy became a target for growth in 1996, when Altran established subsidiary Altran Italy, before making its first acquisition in that country in 1997.
In 1998, Altran added four new Italian acquisitions, EKAR, RSI Sistemi, CCS and Pool. In 1999, the company added an office in Turin as well as two new companies, ASP and O&I.
Germany was also a primary target for Altran during this period, starting with the 1997 establishment of Altran Technologies GmbH and the acquisition of Europspace Technische Entwicklungen, a company that had been formed in 1993 and specialized in aeronautics. In 1998, the company added consulting group Berata and, the following year, Askon Consulting joined the group, which then expanded with a second component, Askon Beratung.
Other European countries joined the Altran network in the late 1990s as well, including Portugal and Luxembourg in 1998 and Austria in 1999. In 1998, Altran deployed a telecommunications network in Portugal. By the end of 1999, the company's sales had climbed to EUR 614 million; significantly, international sales already accounted for more than one-third of the company's total revenues.
Similar progress was made in Switzerland, a market Altran entered in 1997 with the purchase of D1B2. The Berata Germany purchase brought Altran that company's Swiss office as well in 1998; that same year, Altran launched its own Swiss startup, Altran Technologies Switzerland. In 1999, the company added three new Swiss companies, among them Innovatica and Cerri.
Significant projects during the decade included the design of the Météor autopilot system for the first automated subway line for the Paris Metro (Line 14) and the attitude control system for the European Space Agency's Ariane 5 rocket.
Early 21st century
In 2000, the company's Italian branch expanded to 10 subsidiaries with the opening of offices in Lombardy and Lazio and the acquisition of CEDATI. Also in 2000, Altran's presence in Switzerland grew with two new subsidiaries (Infolearn and De Simone & Osswald). In Germany, Altran acquired I&K Beratung. The United States became a primary target for the company's expansion with the acquisition of a company that was renamed Altran Corporation.
Altran began building its operations in South America as well, especially in Brazil. By the end of 2001, Altran's revenues had jumped to more than 1.2 billion euros, while its ranks of consultants now topped 15,000.
Altran became involved in a couple of new PR initiatives at the beginning of the decade, including a partnership with the Renault F1 racing team and a commitment to the Solar Impulse project, which aimed to circumnavigate the Earth using only solar power.
In 2002, Askon Beratung was spun off from Askon Consulting as a separate, independently operating company within Altran, and the company's Swiss network added a new component with the purchase of Sigma. That year, a full-scale entry into the United States was made. After providing $56 million to back a management buyout of the European, Asian, and Latin American operations of bankrupt Arthur D. Little (the US-based consulting firm founded in 1886), Altran itself acquired the Arthur D. Little brand and trademark. This acquisition was seen as an important step in achieving the company's next growth target. Sales grew to 2 billion euros by 2003 and the company had more than 40,000 engineers by 2005.
In 2004, Altran established operations in Asia and created Altran Pr[i]me, a consulting outfit specialized in large-scale innovation projects.
On 29 December 2006, all subsidiaries based in Ile de France were merged under the name of Altran Technologies SA, a technology consultant, which was organized into four business lines (as well as brand names):
Altran TEM: Telecommunications, Electronics and Multimedia.
Altran AIT: Automobiles, Infrastructure and Transportation.
Altran Eilis: Energy, Industry and Life Science.
Altran ASD: Aeronautics, Space and Defence.
In 2009, Altran launched its Altran Research program to reinforce its position as a leading innovation consultancy. The program is centered around three main themes: designing tools that can guarantee long-lasting solutions, innovative research and proof-of-concepts, and research on how to organize and improve innovative practices.
In 2012, as part of its Performance Plan 2012, PSA Peugeot Citroën chose Altran as its strategic partner.
In early 2013, Altran group finalised the acquisition of 100% of IndustrieHansa, an engineering and consulting group based in Germany, placing it among the top five in the market of Technical Consultancy, Innovation, Research and Development.
In June 2015, Altran and General Electric (GE) announced a new agreement to co-develop the next generation of industrial Internet solutions that will allow companies to take advantage of the Internet of Things and big data to optimize the management of their employees and processes.
Altran continues to acquire innovation consultancies in other countries as part of its expansion strategy. In February 2015, it acquired Nspyre, a Dutch leader in R&D and high-technology. In July 2015, it bought SiConTech, an Indian engineering company specializing in semiconductors.
Altran's revenues reached €1.945 billion in 2015. It currently has over 25,000 employees operating in over 20 countries.
In November 2015, Dominique Cerutti announced his five-year strategic plan, "Altran 2020. Ignition." The plan aims for the firm to reach 3 billion euros in revenue in five years and a big increase in profitability.
In December 2015, Altran announced the acquisition of Tessella, an international leader in analytical and data science consulting.
In 2016, the company acquired two other American companies: Synapse, specializing in the development of innovative products, and Lohika, a software engineering firm. This transatlantic expansion is one of the principal approaches to development supported by Altran in the Ignition 2020 strategic plan.
Additionally, Altran announced in October 2016 the acquisition of two automobile industry companies: Swell, an engineering services and research and development firm based in the Czech Republic, as well as Benteler Engineering, a German firm specializing in design and engineering services. Dominique Cerutti is noted for establishing several strategic partnerships, notably with Divergent, an American holding that integrates 3D printing in the automobile production process, and the Chinese digital mapping holding EMG.
On 22 December 2016, Altran acquired Pricol Technologies, an India-based engineering solutions company.
In July and September 2017, Altran finalized two acquisitions: Information Risk Management (IRM) and GlobalEdge. The acquisition of IRM enabled Altran to enhance its presence and offering in the domain of cyber security. The purchase of GlobalEdge, an Indian software product engineering firm, was aimed at helping Altran develop its presence in India as well as in the US, where GlobalEdge has an office in California.
In November 2017, the company also acquired Aricent, a global digital design and engineering company whose headquarters are in Santa Clara, California. The $2.0 billion transaction enables the company to become the global leader in engineering and R&D services, completing its "Altran 2020. Ignition" strategic plan as early as 2018. The acquisition was completed on 22 March 2018, bringing the overall turnover of the new structure close to €3 billion.
On 28 June 2018, Altran announced the plan "The High Road, Altran 2022". This plan targets a 14.5% margin and a turnover of 4 billion euros in 2022 by betting on technological breakthroughs.
Takeover by Capgemini
On 1 April 2020, Capgemini's friendly takeover bid for Altran was finalized. Capgemini announced that it had reached the squeeze-out threshold of 90% of Altran's capital; Altran is no longer an independent company and was delisted from the stock market on 15 April 2020. For Dominique Cerutti, Chairman and CEO of Altran, the takeover "will give birth to the world leader in ‘Intelligent Industry’ to champion the digital transformation of companies".
Organization and activities
Altran Technologies is active in innovation and advanced engineering consulting. The company covers the entire project life-cycle, from the planning stages (technological monitoring, technical feasibility studies, strategy planning, etc.) to final realization (design, implementation, testing, etc.).
As of 2008, Altran was organized in roughly 200 branches, each autonomous in their management and commercial strategy. The firm's main business areas are as follows:
Aerospace and Defense
Automotive and Transportation
Energy and Industry
Financial Services
Government
Life sciences
Media
Railway
Telecom
The Altran group is active worldwide with 23 country divisions.
Revenue breakdown by sector:
Technology and research & development consulting (68.5%)
Organizational and information technology consulting (31.5%)
Geographical breakdown of revenues: France (43.3%), Europe (51.6%) and other (5.1%).
Software Frameworks
Automotive frameworks
Cloud & edge computing
Intelligent automation
Internet & embedded system
Networking
Security software
Worldwide presences
Altran is headquartered on the avenue Charles de Gaulle in Neuilly-sur-Seine, France. The group is present in Belgium, Brazil, Canada, China, Colombia, Germany, Spain, Ukraine, France, Italy, India, Luxembourg, Malaysia, Mexico, Tunisia, Morocco, the Netherlands, Norway, Austria, Portugal, Romania, Sweden, Switzerland, the Middle East, the United Kingdom and the United States.
Corporate governance
As of February 2018, Altran's executive leadership was as follows:
Dominique Cerutti: Chairman and CEO
Cyril Roger: Senior Executive Vice-President for Europe and Delegate Director
Albin Jacquemont: Executive Vice-President, Chief Financial Officer
Pascal Brier: Executive Vice-President for Strategy, Innovation and Solutions
Daniel Chaffraix: Executive Vice-President in charge of Transformation and Executive Vice-President North America and India
Luis Abad: CEO Spain
Marcel Patrignani: CEO Italy
William Rozé: CEO France
Research and innovation
Altran Research
Altran Research, headed by Fabrice Mariaud, is Altran's internal R&D department in France. Scientific experts, each within their domain of expertise, plan and put in place research and innovation projects in collaboration with Altran Lab, academic partners and industrial actors. Current research areas include e-health, space & aeronautics, energy, complex systems, transportation and mobility, industry, and the services of the future.
Altran Lab
Altran Lab is made up of an incubator, an innovation hub and Altran Pr[i]me, created in 2004 and focused on innovation management.
Altran Foundation for Innovation
The Altran Foundation for Innovation is an international scientific competition run by the company.
The competition's theme is selected each year addressing a major issue in society. The entries are judged by a panel containing scientific, political or academic experts. A prize of a year's technological support for the project is awarded to the winner and Altran's consultant teams will also follow up the awarded project.
Pro bono work
Altran France does pro bono work in areas relating to culture, civic engagement and innovation. In particular, Altran aids the Musée des Arts et Métiers of Paris, the Quai Branly Museum and the Arab World Institute with their digital strategy and management of their digital cultural assets.
Financial data
Altran first appeared on the Paris stock market on 20 October 1987.
Stock valued on the Paris stock market (Euronext)
Member of the CAC All Shares index
ISIN Code: FR0000034639
Number of outstanding shares as of 30 October 2015: 175,536,188
Market capitalization as of 10 April 2019: 2.5 billion euros
Primary stockholders as of 10 April 2019:
Altrafin Participations: 8.4%
Alexis Kniazeff: 1.4%
Hubert Martigny: 1.4%
Financial data table
See also
List of IT consulting firms
Frog Design Inc.
Tessella
Cambridge Consultants
Capgemini
References
External links
Consulting firms established in 1982
Engineering companies of France
Engineering consulting firms
International information technology consulting firms
Management consulting firms
International management consulting firms
Companies based in Paris
French companies established in 1982
Technology companies established in 1982
Companies formerly listed on the Paris Bourse
2020 mergers and acquisitions |
13018092 | https://en.wikipedia.org/wiki/University%20of%20La%20Salette | University of La Salette | The University of La Salette is a private Catholic, coeducational basic and higher education institution run by the Missionaries of Our Lady of La Salette in Santiago City, Philippines. It was founded by the La Salettes in June, 1951. It is one of the top performing universities in the region.
The Missionaries of Our Lady of La Salette, mandated by Msgr. Constancio Jurgens, Bishop of Tuguegarao, Cagayan, Philippines, to establish an educational complex as a response to the urgent need of the people in Isabela, opened La Salette of Santiago, a high school in 1950. In consonance with the growing need of the people for higher education, La Salette of Santiago moved up to College level with the assistance of the Maryknoll Sisters. In 1957, Rev. Fr. Jose R. Nacu, M.S., the first Filipino Missionary of Our Lady of La Salette, ordained priest in Fall River, Massachusetts, became the first Filipino rector of La Salette of Santiago College. Eventually, in 1998, La Salette of Santiago was raised to a university level under the leadership of Rev. Fr. Romeo Gonzales, M.S., Ph.D.
Known for its charism of Reconciliation (Reconciliare), the University of La Salette has its main campus in Santiago City, with an extension campus in Silang, Cavite, Philippines. The span of its operations covers the vast southern Isabela province, with extension offices in various towns for summer courses.
At present, the University Hospital, a 350-bed tertiary hospital with a budget of 650 million pesos, is under construction to cater to the region's growing healthcare demands.
History
The University of La Salette was established in June 1951 as a high school, and initially offered two courses: Secretarial Science and Arts. In March 1953, the school held its first graduation in Secretarial Science. Due to a lack of qualified instructors, the Bachelor of Science in Education course was temporarily suspended, but was re-opened in 1963–1964, and Bachelor of Science in Elementary Education was also offered that same year.
During the 1968–69 school year, La Salette of Santiago, Inc. gained membership in the Catholic Educational Association of the Philippines (CEAP). In 1970, the college department was moved from the high school campus to its present site to cope with the unprecedented increase in college enrollment. In 1972, the Ministry of Education, Culture and Sports (MECS) granted government recognition for the degree of Bachelor of Science in Business Administration.
In 1974, during the last year of management by the Maryknoll Sisters at La Salette of Santiago, Inc., the college was granted accreditation by the Philippine Accrediting Association of Schools Colleges and Universities (PAASCU), making it the first PAASCU accredited school in the Province of Isabela, second in Cagayan Valley and 19th in the Philippines.
Towards the end of 1974, the Religious of the Assumption took over the administration of La Salette College and the high school, and the La Salette Panangutan Center was created in response to the challenge of Christian service, particularly to the poor in the locality. The university also underwent renovations: the library was transferred to a new location, more buildings were constructed for new facilities, a proper accounting system was installed for the Business Office, the Faculty and Staff Development Program was strengthened, and research and outreach activities were undertaken.
In 1979, The Daughters of Charity were invited to help in the school's management. Sometime between 1982 and 1987, the University of La Salette partnered with Computer Exponents Inc. to introduce Computer Education, and two years later the university assumed full responsibility over the program after MECS approved the integration of Computer Education in all courses of the college.
The Child Learning Center was opened in 1983, and served as a training center for the Bachelor in Elementary Education (BEE) student interns. DECS granted government recognition of the Preschool and Basic Elementary Education programs two years later.
Cognizant of the professional needs of the teachers in the La Salette School System as well as in other schools in the region, the Graduate School opened in 1984 with Master of Arts in Development Education. In the same school year, a five-year Civil Engineering course was also offered in the undergraduate level.
The High School Department, likewise, continued to update and upgrade its standard. It embarked on the rigorous process of self-assessment which resulted in the first formal survey of Philippine Accrediting Association of Schools Colleges and Universities (PAASCU) in 1984. The high school was granted its initial accreditation for a period of three years on March 22, 1985, making it the first accredited high school in Region 2. That same year, on June 17, 1985, the degree of Master of Arts in Development Education was granted government recognition.
However, a major tragedy which affected the whole institution struck on March 25, 1986. A fire of undetermined origin razed to the ground the main building of the High School Department. The fire destroyed 19 classrooms, 5 administrative offices, a large faculty room, and a storeroom where audio-visual equipment, textbooks, industrial arts tools and machines and other school supplies, were kept. However, believing in maintaining and supporting the high standard of instruction which the school has committed to continue, the local community, through the leadership of the Home-School Association and Alumni Association worked extensively with the school administration in building new classrooms.
February 13, 1987, marked the promotion of La Salette College from Level II to Level III Accredited status by both PAASCU and FAAP, an honor that made La Salette College the first Level III accredited school in Region 02. The four-year Bachelor of Science in Secretarial Administration was also granted government recognition on July 27, 1987. In school year 1986–87, the president was appointed by Fund for Assistance to Private Education (FAPE) to take Management of FAPE-funded government projects such as the Secondary Education Development Program (SEDP), the Educational Service Contracting (ESC) and the Tuition Fee Supplement for all private High Schools in the region.
Despite limited funds, the administrators started the construction of the multi-purpose building named Our Lady of the Miraculous Medal Building which at present houses the following: Chapel, Library, Audio-Visual Room, Offices of the Deans of Education, Liberal Arts, and Engineering, the Central Supply Room, Demonstration Room, the Model Clinic for the Nursing and Midwifery departments, the Drafting Room, the Hydraulic and Physics Laboratories, the Lawrence Conference Hall, a Dormitory and classrooms. Today, the extension of the Our Lady of the Miraculous Medal Building houses more classrooms for the growing population, the Graduate School Library, the Physical therapy Laboratories, and the centralized Laboratories in Biology, Chemistry, and Zoology. The college also acquired an additional lot of two hectares to anticipate future expansion programs as indicated in the five-year plan.
In school year 1987–1988, there was a need for the creation of the position of a vice-president to assist the president in administration. To manifest its commitment to sustain quality Christian education, the College Department renewed its accreditation status as Level III by PAASCU and FAAP in 1989. In the same year, the college opened the Criminology Course to respond to the development needs of the region in terms of peace and order which is a very crucial component of rural development. The following year, the degree on Master in Business Management was added to the Graduate School Program.
Similarly, to meet the demand for adequate health services and technology in the local community and in the region, the Midwifery course was offered in 1992, followed by the Bachelor of Science in Nursing in 1993, Bachelor of Science in Geodetic Engineering, Master in Public Administration and Doctor of Philosophy (Educational Management) in 1994. The school year 1993–1994 marked the second PAASCU re-accreditation of the High School Department for a period of five years thereby raising its accreditation level to Level II.
In July 1994, the College President Fr. Romeo B. Gonzales, MS, PhD, filed the application for the conversion of the college into a university. The college in its desire to be of greater service to other schools in the region, developed and opened its physical facilities for provincial and regional conferences/seminar-workshops of schools and government agencies. This year was also the height of the strong leadership and involvement of the college in various educational activities in the region.
In school year 1995–96, curricular expansion was made with the opening of the Bachelor of Science in Physical Therapy (BSPT), the Bachelor of Science in Computer Information System (BSCIS), Bachelor of Science in Psychology, Bachelor of Science in Mathematics and the integration of computer in all courses. Adjacent lots at 1.5 hectares were purchased for expansion.
Constant follow-up was made by the administration with regards to its application for university status. During the first semester of school year 1996–1997, a team from the Commission on Higher Education (CHED) Regional Office was sent to assess the capability and qualification of the college to become a university. Sometime in February 1997, a team from the CHED National Office was organized and sent to La Salette College to make follow-up assessment and to make recommendations to the commissioners on the status of La Salette College for becoming a university. The Honorable Commissioner Mona D. Valisno, Managing Commissioner and Oversight Commissioner for Luzon, was invited by La Salette College as the Commencement Speaker in March 1997. This provided the time and venue for the commissioner to see for herself the curricular programs, the extension services and the research services of La Salette College.
In school year 1997–1998, two big computer laboratories with Local Area Network (LAN) were provided to keep up with the development of Information Technology. Expansion of the Our Lady of Miraculous Medal building was constructed to provide more classrooms for the growing population.
A series of visits by the three CHED Commissioners was made in November 1997. It was hoped that the University Charter would be awarded in January 1998 during the Golden Jubilee Celebration of the Presence of the Missionaries of Our Lady of La Salette in the Philippines (1948–1998). However, the administration was required to comply with three recommendations: the improvement of the facade, an Internet connection, and the development and production of more research studies. The administration exerted effort to respond to these demands. Before the start of school year 1997–98, the administration signed a Memorandum of Agreement with the Fund for Assistance to Private Education (FAPENet) for Internet connection and services.
In February 1998, the College of Nursing and the College of Engineering went through a preliminary survey for accreditation by PAASCU. The long-awaited dream of becoming a university was realized on June 25, 1998, when the formal inauguration and awarding of the University Charter was held. Fr. Romeo B. Gonzales, MS, Ph.D., who had served the college since 1979, was installed as the first University President.
The challenge of being a university continues to inspire the administration to update and expand the curricular programs. It has established several Graduate School extension classes in the Provinces of Cagayan and Isabela through its Center for Alternative Learning in order to respond to the call for borderless education and community service. The University of La Salette was one of the universities in Region 02 which was deputized by the Commission on Higher Education (CHED) Manila to implement the Expanded Tertiary Education Equivalency and Accreditation Program (ETEEAP). The Memorandum of Agreement was signed by the University President and Commissioner Mona D. Valisno on May 19, 1999.
Today, University of La Salette, with its population close to four thousand, stands with pride in serving the youth of Santiago City, Province of Isabela and the entire Cagayan Valley. In pursuit of academic excellence, Christian formation, leadership and service, University of La Salette continues to offer well-rounded education that provides an opportunity for self-realization and actualization. Each one is called to continue to live by heart the message of Our Lady of La Salette for conversion, prayer and zeal and to make her message known to all people.
Academic Programs
Graduate Programs
Doctor in Business Administration
Doctor in Public Administration
Doctor of Philosophy in Educational Management
Doctor of Philosophy in Education
Major in Science Education
Master in Business Management
Master of Arts in Education
Major in: English, Educational Management, Filipino, Mathematics, Guidance and Counseling, Science, Physical Education, Peace and Reconciliation Studies
Master of Arts in Nursing
Major in: Nursing Service Administration
Master of Science in Nursing
Major in: Community Health Nursing, Medical-Surgical Nursing, and Maternal & Child Nursing
Master of Arts in Criminology
Master of Science in Engineering Management
Master of Science in Library and Information Science
Master of Science in Public Health
Master of Science in Social Work
Master of Information Technology
Undergraduate Programs
College of Accountancy
Bachelor of Science in Accountancy
Bachelor of Science in Accounting Information System
College of Arts and Sciences
Bachelor of Arts in Political Science
Bachelor of Arts in Journalism
Bachelor of Arts in Philosophy
Bachelor of Science in Psychology
Bachelor of Science in Social Work
College of Business Education
Bachelor of Science in Business Administration
Major in: Human Resources Management, Financial Management, Marketing Management
Bachelor of Science in Office Administration
Bachelor of Science in Hospitality Management
Bachelor of Science in Tourism Management
College of Criminology
Bachelor of Science in Criminology
College of Education
Bachelor of Elementary Education
Bachelor of Secondary Early Childhood Education
Bachelor of Physical Education
Bachelor of Secondary Education
Major in: English, Filipino, Mathematics, Sciences, and Social Studies
College of Engineering and Architecture
Bachelor of Science in Civil Engineering
Bachelor of Science in Electronics Engineering
Bachelor of Science in Computer Engineering
Bachelor of Science in Architecture
Bachelor of Science in Geodetic Engineering
College of Information Technology
Bachelor of Science in Information Technology
Bachelor of Library and Information Science
College of Medicine and Allied Medical Programs
Bachelor of Science in Medical Laboratory Science
Bachelor of Science in Radiologic Technology
Bachelor of Science in Pharmacy
Bachelor of Science in Physical Therapy
College of Nursing, Public Health, and Midwifery
Bachelor of Science in Nursing
Bachelor of Science in Midwifery
Bachelor of Science in Public Health
College of Law
Juris Doctor (Bachelor of Laws)
Basic Education
Senior High
Grade 11
Grade 12
Junior High
Grade 1-6
Kinder
Nursery (Preschool)
References
External links
Catholic universities and colleges in the Philippines
Universities and colleges in Isabela (province)
Educational institutions established in 1951
Education in Santiago, Isabela
1951 establishments in the Philippines |
420214 | https://en.wikipedia.org/wiki/Expanded%20memory | Expanded memory | In DOS memory management, expanded memory is a system of bank switching that provided additional memory to DOS programs beyond the limit of conventional memory (640 KiB).
Expanded memory is an umbrella term for several incompatible technology variants. The most widely used variant was the Expanded Memory Specification (EMS), which was developed jointly by Lotus Software, Intel, and Microsoft, so that this specification was sometimes referred to as "LIM EMS". LIM EMS had several versions. The first widely implemented version was EMS 3.2, which supported up to 8 MiB of expanded memory and uses parts of the address space normally dedicated to communication with peripherals (upper memory) to map portions of the expanded memory. EEMS, an expanded-memory management standard competing with LIM EMS 3.x, was developed by AST Research, Quadram and Ashton-Tate ("AQA"); it could map any area of the lower 1 MiB. EEMS ultimately was incorporated in LIM EMS 4.0, which supported up to 32 MiB of expanded memory and provided some support for DOS multitasking as well. IBM, however, created its own expanded-memory standard called XMA.
The use of expanded memory became common with games and business programs such as Lotus 1-2-3 in the late 1980s through the mid-1990s, but its use declined as users switched from DOS to protected-mode operating systems such as Linux, IBM OS/2, and Microsoft Windows.
Background
The 8088 processor of the IBM PC and IBM PC/XT could address one megabyte (MiB, or 2²⁰ bytes) of memory. It inherited this limit from the 20-bit external address bus of the Intel 8086. The designers of the PC allocated the lower 640 KiB (655,360 bytes) of address space for read-write program memory (RAM), called "conventional memory", and the remaining 384 KiB of memory space was reserved for uses such as the system BIOS, video memory, and memory on expansion peripheral boards.
Even though the IBM PC AT, introduced in 1984, used the 80286 chip that could address up to 16 MiB of RAM as extended memory, it could only do so in protected mode. The scarcity of software compatible with the 286 protected mode (no standard DOS applications could run in it) meant that the market was still open for another solution.
To fit potentially much more memory than the 384 KiB of free address space would allow, a bank switching scheme was devised, where only selected parts of the additional memory would be accessible at any given time. Originally, a single 64 KiB (2¹⁶ bytes) window of memory, called a page frame, was possible; later this was made more flexible. Programs had to be written in a specific way to access expanded memory. The "window" between lower RAM and expanded RAM could be moved to different locations within the expanded RAM.
A first attempt to use a bank switching technique was made by Tall Tree Systems with their JRAM boards, but these did not catch on. (Tall Tree Systems later made EMS-based boards using the same JRAM brand.)
Expanded Memory Specification (EMS)
Lotus Development, Intel, and Microsoft cooperated to develop the EMS standard (aka LIM EMS). The first publicly available version of EMS, version 3.0, allowed access to up to 4 MiB of expanded memory. This was increased to 8 MiB with version 3.2 of the specification. The final version of EMS, version 4.0, increased the maximum amount of expanded memory to 32 MiB and supported additional functionality.
Microsoft thought that bank switching was an inelegant and temporary, but necessary stopgap measure. Slamming his fist on the table during an interview Bill Gates said of expanded memory, "It's garbage! It's a kludge! … But we're going to do it". The companies planned to launch the standard at the Spring 1985 COMDEX, with many expansion-card and software companies announcing their support.
The first public version of the EMS standard, called EMS 3.0 was released in 1985; EMS 3.0, however, saw almost no hardware implementations before being superseded by EMS 3.2. EMS 3.2 used a 64 KiB region in the upper 384 KiB (upper memory area) divided into four 16 KiB pages, which could be used to map portions of the expanded memory.
In turn, EMS 3.2 was improved upon by a group of three other companies: AST Research, Quadram and Ashton-Tate, which created their own Enhanced EMS (EEMS) standard. EEMS allowed any 16 KiB region in lower RAM to be mapped to expanded memory, as long as it was not associated with interrupts or dedicated I/O memory such as network or video cards. Thus, entire programs could be switched in and out of the extra RAM. EEMS also added support for two sets of mapping registers. These features were used by early DOS multitasker software such as DESQview. Released in 1987, the LIM EMS 4.0 specification incorporated practically all features of EEMS.
A new feature added in LIM EMS 4.0 was that EMS boards could have multiple sets of page-mapping registers (up to 64 sets). This allowed a primitive form of DOS multitasking. The caveat was, however, that the standard did not specify how many register sets a board should have, so there was great variability between hardware implementations in this respect.
The Expanded Memory Specification (EMS) is the specification describing the use of expanded memory. EMS functions are accessible through software interrupt 67h. Programs using EMS must first establish the presence of an installed expanded memory manager (EMM) by checking for a device driver with the device name EMMXXXX0.
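The detection sequence just described can be sketched in a few lines of real-mode C. The example below is a minimal illustration, assuming a Borland-style 16-bit DOS compiler (such as Turbo C/C++) that provides getvect(), FP_SEG(), MK_FP() and int86() in <dos.h>; the helper names ems_present() and ems_page_frame() are illustrative and not part of any standard library.

/* Minimal sketch of EMS detection and a first EMS call (real-mode DOS C). */
#include <dos.h>
#include <stdio.h>

/* The EMM's device header lies at the segment pointed to by the INT 67h
   vector; its device name field at offset 0x0A must read "EMMXXXX0". */
static int ems_present(void)
{
    const char signature[] = "EMMXXXX0";
    const char far *name =
        (const char far *)MK_FP(FP_SEG(getvect(0x67)), 0x0A);
    int i;
    for (i = 0; i < 8; i++)
        if (name[i] != signature[i])
            return 0;
    return 1;
}

/* EMS function 41h: ask the EMM for the segment of the 64 KiB page frame. */
static unsigned ems_page_frame(void)
{
    union REGS r;
    r.h.ah = 0x41;
    int86(0x67, &r, &r);
    return (r.h.ah == 0) ? r.x.bx : 0;   /* AH = 0 means success, BX = segment */
}

int main(void)
{
    if (!ems_present()) {
        printf("No expanded memory manager installed.\n");
        return 1;
    }
    printf("EMS page frame at segment %04X\n", ems_page_frame());
    return 0;
}

A real program would go on to allocate logical pages (function 43h) and map them into the page frame (function 44h) before copying data into the window.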
Expanded Memory Adapter (XMA)
IBM developed their own memory standard called Expanded Memory Adapter (XMA); the IBM DOS driver for it was XMAEM.SYS. Unlike EMS, the IBM expansion boards could be addressed both using an expanded memory model and as extended memory. The expanded memory hardware interface used by XMA boards is, however, incompatible with EMS, but a XMA2EMS.SYS driver provided EMS emulation for XMA boards. XMA boards were first introduced for the 1986 (revamped) models of the 3270 PC.
Implementations
Expansion boards
This insertion of a memory window into the peripheral address space could originally be accomplished only through specific expansion boards, plugged into the ISA expansion bus of the computer. Famous 1980s expanded memory boards were AST RAMpage, IBM PS/2 80286 Memory Expansion Option, AT&T Expanded Memory Adapter and the Intel Above Board. Given the price of RAM during the period, up to several hundred dollars per MiB, and the quality and reputation of the above brand names, an expanded memory board was very expensive.
Motherboard chipsets
Later, some motherboard chipsets of Intel 80286-based computers implemented an expanded memory scheme that did not require add-on boards, notably the NEAT chipset. Typically, software switches determined how much memory should be used as expanded memory and how much should be used as extended memory.
Device drivers
An expanded-memory board, being a hardware peripheral, needed a software device driver, which exported its services. Such a device driver was called expanded-memory manager. Its name was variable; the previously mentioned boards used REMM.SYS (AST), PS2EMM.SYS (IBM), AEMM.SYS (AT&T) and EMM.SYS (Intel) respectively. Later, the expression became associated with software-only solutions requiring the Intel 80386 processor, for example Quarterdeck's QEMM, Qualitas' 386MAX or the default EMM386 in MS-DOS, PC DOS and DR-DOS.
Software emulation
Beginning in 1986, the built-in memory management features of the Intel 80386 processor could freely remap the address space when running legacy real-mode software, making hardware solutions unnecessary. Expanded memory could be simulated in software.
The first software expanded-memory management (emulation) program was CEMM, available in September 1986 as a utility for the Compaq Deskpro 386. A popular and well-featured commercial solution was Quarterdeck's QEMM. A contender was Qualitas' 386MAX. Functionality was later incorporated into MS-DOS 4.01 in 1989 and into DR DOS 5.0 in 1990, as EMM386.
Software expanded-memory managers in general offered additional, but closely related functionality. Notably, they allowed using parts of the upper memory area (UMA) (the upper 384 KiB of real-mode address space) called upper memory blocks (UMBs) and provided tools for loading small programs, typically TSRs, into them ("LOADHI" or "LOADHIGH").
Interaction between extended memory, expanded-memory emulation and DOS extenders ended up being regulated by the XMS, Virtual Control Program Interface (VCPI), DOS Protected Mode Interface (DPMI) and DOS Protected Mode Services (DPMS) specifications.
Certain emulation programs, colloquially known as LIMulators, did not rely on motherboard or 80386 features at all. Instead, they reserved 64 KiB of the base RAM for the expanded memory window, where they copied data to and from either extended memory or the hard disk when application programs requested page switches. This was programmatically easy to implement, but performance was low. This technique was offered by AboveDisk from Above Software and by several shareware programs.
Decline
Expanded Memory usage declined in the 1990s. The IBM AT Intel 80286 supported 24 bits of address space (16 MiB) in protected mode, and the 386 supported 32-bit addresses, or 4 gigabytes (2³² bytes) of RAM – 4096 times the addressable space of the original 8086. DOS itself did not directly support protected mode, but Microsoft eventually developed DPMI, and several DOS extenders were published based on it. DOS programs like Doom could use extenders like DOS/4G to run in protected mode while still using the DOS API. In the early 1990s new operating systems like Linux, Windows 9x, Windows NT, OS/2, and BSD/OS supported protected mode "out of the box". These and similar developments rendered Expanded Memory an obsolete concept.
See also
Conventional memory
DOS memory management
Extended memory (XMS)
High memory area (HMA)
Upper memory area (UMA)
Global EMM Import Specification (GEMMIS)
x86 memory segmentation
Address Windowing Extensions (AWE)
Physical Address Extension (PAE)
References
Further reading
X86 memory management
DOS memory management
Memory expansion |
39566700 | https://en.wikipedia.org/wiki/Landscape%20Express | Landscape Express | Landscape Express is a CAD software application for 2D and 3D design and drafting. It is used primarily by landscape designers. The software is developed, sold and supported by the British company 'Trial Systems Ltd' based in Burton-upon-Trent, Staffordshire. The software was first released in 2012, developed by Peter Boyce & Steven Pearce in conjunction with Anton Heymann. The software is based on the Drawing Express CAD system, which utilizes a tablet and pen interface. A graphics tablet, pen and overlay are used to select, use and manipulate commands, thus mimicking the draughtsman's drawing board. This differs from the traditional CAD software ‘drop-down’ menu structures on-screen, as the menu system is laid out in front of the user. The method of drawing in this way is aimed at being intuitive, allowing the user to create and amend drawings as quickly as possible.
3D
The 3D commands for the system are modular, so the system can be used with or without them. A 3D model can be created while displaying it in rendered mode in real time. The 3D model is automatically created from the 2D plans, including all walls, openings, and the roof. Other information, such as sky, landscaping, people and cars, can be added to the model for detail. The model can be exported to POV-Ray and rendered inside the raytracer software, giving higher resolution and detail.
Platforms and license types
Supported platforms
Landscape Express is Windows based software which runs on Windows 2000, XP, Vista, Windows 7 and Windows 8. The software works on 32-bit and 64-bit versions of the Windows operating system.
Landscape Express can be run on a Mac with Windows installed. This is possible on any Intel-based Mac running versions 10.5, 10.6, 10.7 or 10.8 of the Mac OS X operating system, through the use of VMware Fusion or Parallels Desktop virtual machine software. Landscape Express can also be installed on a Boot Camp partition.
License types
Landscape Express requires a USB dongle to be present on the system it is running on. This is the license for the software and without a dongle present on the system, the program will not open or save drawings.
Data interchange
Landscape Express drawing files use a .EXP file extension. The software can import and export DWG and DXF files, amongst others. Drawings can also be saved to PDF format using any available PDF converter. Images can be imported and exported; importing works with most mainstream image file formats.
Version history
2013 - Landscape Express
See also
CAD
Comparison of CAD Software
References
External links
Landscape Express Webpage
Trial Systems Ltd
William Sutherland Architect, CAD - a basic guide
Computer-aided design software
Computer-aided design software for Windows
3D graphics software
2012 software |
8665933 | https://en.wikipedia.org/wiki/Navisworks | Navisworks | Navisworks (known for a while as JetStream) is a 3D design review package for Microsoft Windows.
Used primarily in construction industries to complement 3D design packages (such as Autodesk Revit, AutoCAD, and MicroStation), Navisworks allows users to open and combine 3D models; navigate around them in real time (though without WASD-style navigation); and review the model using a set of tools including comments, redlining, viewpoints, and measurements. A selection of plug-ins enhances the package, adding interference detection, 4D time simulation, photorealistic rendering and PDF-like publishing.
The software was originally created by Sheffield, UK based developer NavisWorks (a subsidiary of Lightwork Design). NavisWorks was purchased by Autodesk for $25 million on June 1, 2007.
Components
Navisworks (formerly JetStream) is built around a core module called Roamer and has a number of built-in functionalities:
Roamer - The core part allows users to open models from a range of 3D design and laser scan formats and combine them into a single 3D model. Users can then navigate around the model in real-time and review the model with a range of mark-up tools.
Publisher - This allows users to publish the complete 3D model into a single NWD file that can be freely opened by anyone using Freedom, a free viewer.
Clash Detective - A functionality to enable interference detection. This means users can select parts of the model and look for places where the geometry conflicts. This is for finding faults in the design.
Renderer (formerly Presenter) - With the Renderer, users can apply materials and lighting to the model and produce photorealistic images and animations.
Quantification - By "taking off" the model, users can automatically make material estimates, measure areas and count building components.
TimeLiner - Adds 4D simulation so the user can link geometry to times and dates and to simulate the construction or demolition of the model over time. Also links with project scheduling software (Such as Microsoft Project or Primavera products) to import task data.
Animator - A feature that allows the users to animate the model and interact with it.
Scripter - This allows the user to set up a collection of actions that they want to happen when certain event conditions are met.
File format support
Navisworks Simulate and Manage are most notable for their support of a wide range of design file formats. Formats natively supported include:
NavisWorks - .nwd, .nwf, .nwc (all versions, no full backward compatibility)
AutoCAD Drawing - .dwg, .dxf (up to AutoCAD 2018)
MicroStation (SE, J, V8, & XM) - .dgn, .prp, .prw (up to v7, & v8)
3D Studio Max - .3ds, .prj (up to 3ds Max 2018)
ACIS SAT - .sat, .sab (all ASM SAT, up to ASM SAT v7)
DWF - .dwf, .dwfx (all versions)
CATIA - .model, session, .exp, dlv3, .CATPart, .CATProduct, .cgr (up to v4, & v5)
IFC - .ifc (IFC2X_PLATFORM, IFC2X_FINAL, IFC2X2_FINAL, IFC2X3, IFC4)
IGES - *.igs*, *.iges* (all versions)
Informatix/MicroGDS - .man, .cv7 (v10)
Inventor - .ipt, .iam, .ipj (up to Inventor 2018)
CIS/2 - .stp (STRUCTURAL_FRAME_SCHEMA)
JT Open - .jt (up to v10)
NX - .prt (up to v9)
Revit - .rvt (up to 2011–2022)
RVM - .rvm (up to v12.0 SP5)
SketchUp - .skp (v5 up to 2015)
PDS Design Review - .dri (legacy file format, support up to 2007)
STL - .stl (binary only)
VRML - .wrl, .wrz (VRML1, VRML2)
Parasolid - .x_b (up to schema 26)
FBX - .fbx (FBX SDK 2017)
Pro/ENGINEER - .prt, .asm, .g, .neu (Wildfire v5, Creo Parametric v1-v3)
STEP - .stp, .step (AP214, AP203E3, AP242)
Solidworks - .prt, .sldprt, .asm, .sldasm (2001, plus 2015)
PDF - .pdf (all versions)
Rhino - .3dm (up to v5)
Solid Edge - .stp, .prt
Additional products that are supported through Autodesk, and third parties:
Revit
MicroStation
3DS Max
ArchiCAD
References
External links
Autodesk products
3D graphics software
BIM software
Building information modeling
Computer-aided design software
Windows graphics-related software |
18119717 | https://en.wikipedia.org/wiki/Web%20typography | Web typography | Web typography refers to the use of fonts on the World Wide Web. When HTML was first created, font faces and styles were controlled exclusively by the settings of each web browser. There was no mechanism for individual Web pages to control font display until Netscape introduced the font element in 1995, which was then standardized in the HTML 3.2 specification. However, the font specified by the font element had to be installed on the user's computer or a fallback font, such as a browser's default sans-serif or monospace font, would be used. The first Cascading Style Sheets specification was published in 1996 and provided the same capabilities.
The CSS2 specification was released in 1998 and attempted to improve the font selection process by adding font matching, synthesis and download. These techniques did not gain much use, and were removed in the CSS2.1 specification. However, Internet Explorer added support for the font downloading feature in version 4.0, released in 1997. Font downloading was later included in the CSS3 fonts module, and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased interest in Web typography, as well as the use of font downloading.
CSS1
In the first CSS specification, authors specified font characteristics via a series of properties: font-family, font-style, font-variant, font-weight, font-size, and the font shorthand.
All fonts were identified solely by name. Beyond the properties mentioned above, designers had no way to style fonts, and no mechanism existed to select fonts not present on the client system.
Web-safe fonts
Web-safe fonts are fonts likely to be present on a wide range of computer systems, and used by Web content authors to increase the likelihood that content displays in their chosen font. If a visitor to a Web site does not have the specified font, their browser tries to select a similar alternative, based on the author-specified fallback fonts and generic families or it uses font substitution defined in the visitor's operating system.
Microsoft's Core fonts for the Web
To ensure that all Web users had a basic set of fonts, Microsoft started the Core fonts for the Web initiative in 1996 (terminated in 2002). Released fonts include Arial, Courier New, Times New Roman, Comic Sans, Impact, Georgia, Trebuchet, Webdings and Verdana—under an EULA that made them freely distributable but also limited some rights to their use. Their high penetration rate has made them a staple for Web designers. However, most Linux distributions don't include these fonts by default.
CSS2 attempted to increase the tools available to Web developers by adding font synthesis, improved font matching and the ability to download remote fonts.
Some CSS2 font properties were removed from CSS2.1 and later included in CSS3.
Fallback fonts
The CSS specification allows for multiple fonts to be listed as fallback fonts. In CSS, the font-family property accepts a list of comma-separated font faces to use, like so:
font-family: "Nimbus Sans L", Helvetica, Arial, sans-serif;
The first font specified is the preferred font. If this font is not available, the Web browser attempts to use the next font in the list. If none of the fonts specified are found, the browser displays its default font. This same process also happens on a per-character basis if the browser tries to display a character not present in the specified font.
Generic font families
To give Web designers some control over the appearance of fonts on their Web pages, even when the specified fonts are not available, the CSS specification allows the use of several generic font families. These families are designed to split fonts into several categories based on their general appearance. They are commonly specified as the last in a series of fallback fonts, as a last resort in the event that none of the fonts specified by the author are available. For several years, there were five generic families:
Sans-serif
Fonts that do not have decorative markings, or serifs, on their letters. These fonts are often considered easier to read on screens.
Serif
Fonts that have decorative markings, or serifs, present on their characters. These fonts are traditionally used in printed books.
Monospace
Fonts in which all characters are equally wide.
Cursive
Fonts that resemble cursive writing. These fonts may have a decorative appearance, but they can be difficult to read at small sizes, so they are generally used sparingly.
Fantasy
Fonts that may contain symbols or other decorative properties, but still represent the specified character.
CSS fonts working draft 4 with lesser browser support
Default fonts on a given system: the purpose of this option is to allow web content to integrate with the look and feel of the native OS. The CSS keyword names for this and the following families are shown in the sketch after this list.
Default fonts on a given system in a serif style
Default fonts on a given system in a sans-serif style
Default fonts on a given system in a monospace style
Default fonts on a given system in a rounded style
Fonts using emoji
Fonts for complex mathematical formula and expressions.
Chinese typefaces that are between serif Song and cursive Kai forms. This style is often used for government documents.
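The CSS Fonts Module Level 4 draft assigns keyword names to these families: system-ui, ui-serif, ui-sans-serif, ui-monospace, ui-rounded, emoji, math and fangsong. The short sketch below shows how such keywords might appear in a stylesheet; the selectors are placeholders, and browser support for each keyword varies:

/* Generic family keywords from the CSS Fonts Module Level 4 draft. */
body      { font-family: system-ui, sans-serif; }      /* default UI font of the OS */
h1        { font-family: ui-serif, serif; }
code, pre { font-family: ui-monospace, monospace; }
button    { font-family: ui-rounded, system-ui, sans-serif; }
.formula  { font-family: math; }                        /* mathematical notation */
.reaction { font-family: emoji; }                       /* emoji presentation */
.notice   { font-family: fangsong, serif; }             /* Song/Kai intermediate style */

As with the earlier generic families, a keyword the browser does not recognize simply causes it to fall through to the next entry in the list.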
Web fonts
History
A technique to refer to and automatically download remote fonts was first specified in the CSS2 specification, which introduced the @font-face construct. At the time, fetching font files from the web was controversial because fonts meant to be used only for certain web pages could also be downloaded and installed in breach of the font license.
Microsoft first added support for downloadable EOT fonts in Internet Explorer 4 in 1997. Authors had to use the proprietary WEFT tool to create a subsetted font file for each page. EOT showed that webfonts could work and the format saw some use in writing systems not supported by common operating systems. However, the format never gained widespread acceptance and was ultimately rejected by W3C.
In 2006, Håkon Wium Lie started a campaign against using EOT, arguing that web browsers should instead support commonly used font formats. Support for the commonly used TrueType and OpenType font formats has since been implemented in Safari 3.1, Opera 10, Mozilla Firefox 3.5 and Internet Explorer 9.
In 2010, the WOFF compression method for TrueType and OpenType fonts was submitted to W3C by the Mozilla Foundation, Opera Software and Microsoft, and browsers have since added support.
Google Fonts was launched in 2010 to serve webfonts under open-source licenses. By 2016, more than 800 webfont families were available.
Webfonts have become an important tool for web designers and as of 2016 a majority of sites use webfonts.
File formats
By using a specific CSS @font-face embedding technique it is possible to embed fonts such that they work with IE4+, Firefox 3.5+, Safari 3.1+, Opera 10+ and Chrome 4.0+. This allows the vast majority of Web users to access this functionality. Some commercial foundries object to the redistribution of their fonts. For example, Hoefler & Frere-Jones says that, while they "...enthusiastically [support] the emergence of a more expressive Web in which designers can safely and reliably use high-quality fonts online," the current delivery of fonts using @font-face is considered "illegal distribution" by the foundry and is not permitted. Instead, Hoefler & Co. offer a proprietary font delivery system rooted in the cloud. Many other commercial type foundries address the redistribution of their fonts by offering a specific license, known as a web font license, which permits the use of the font software to display content on the web, a use normally prohibited by basic desktop licenses. Naturally this does not interfere with fonts and foundries under free licences.
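A typical cross-browser embedding rule combines several of the formats discussed below in a single @font-face declaration. The following is a minimal sketch; the family name "MyWebFont" and the fonts/mywebfont.* file paths are placeholders rather than real files:

/* Sketch of a cross-browser @font-face rule with placeholder paths. */
@font-face {
  font-family: "MyWebFont";
  src: url("fonts/mywebfont.eot");                         /* IE 4-8 (EOT) */
  src: url("fonts/mywebfont.eot?#iefix") format("embedded-opentype"),
       url("fonts/mywebfont.woff") format("woff"),         /* most current browsers */
       url("fonts/mywebfont.ttf") format("truetype");      /* Safari 3.1+, Opera 10+, Firefox 3.5+ */
}

body {
  font-family: "MyWebFont", Georgia, serif;                /* downloaded face first, then fallbacks */
}

Repeating the EOT source with the "?#iefix" query fragment is a widely used workaround for a parsing quirk in older versions of Internet Explorer, which otherwise misread a src list containing multiple formats.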
TrueDoc
TrueDoc, while not specifically a webfont specification, was the first standard for embedding fonts. It was developed by the type foundry Bitstream in 1994, and became natively supported in Netscape Navigator 4 in 1996. Due to open source license restrictions, with Netscape unable to release Bitstream's source code, native support for the technology ended when Netscape Navigator 6 was released. An ActiveX plugin was available to add support for TrueDoc to Internet Explorer, but the technology had to compete against Microsoft's Embedded OpenType fonts, which had been natively supported in the Internet Explorer browser since version 4.0. Another impediment was the lack of an open-source or free tool to create webfonts in the TrueDoc format, whereas Microsoft made available a free Web Embedding Fonts Tool to create webfonts in their format.
Embedded OpenType
Internet Explorer has supported font embedding through the proprietary Embedded OpenType standard since version 4.0. It uses digital rights management techniques to help prevent fonts from being copied and used without a license. A simplified subset of EOT has been formalized under the name CWT (Compatibility Web Type, formerly EOT-Lite).
Scalable Vector Graphics
Web typography applies to SVG in two ways:
All versions of the SVG 1.1 specification, including the SVGT subset, define a font module allowing the creation of fonts within an SVG document. Safari introduced support for many of these properties in version 3. Opera added preliminary support in version 8.0, with support for more properties in 9.0.
The SVG specification lets CSS apply to SVG documents in a similar manner to HTML documents, and the @font-face rule can be applied to text in SVG documents. Opera added support for this in version 10, and WebKit since version 325 also supports this method using SVG fonts only.
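A minimal sketch of this second approach follows; the file and family names are hypothetical, and the rule would sit in the SVG document's own stylesheet (for example inside a style element) before being applied to text elements:
/* Inside an SVG document's stylesheet: declare an SVG font source */
@font-face {
  font-family: 'ExampleFace';
  src: url('examplefont.svg#ExampleFace') format('svg');
}
text {
  font-family: 'ExampleFace', sans-serif; /* fall back to a generic face if the SVG font fails to load */
}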
Scalable Vector Graphics Fonts
SVG fonts were a W3C standard for defining fonts using SVG graphics, allowing multicolour or animated glyphs. The format was originally part of the SVG 1.1 specification but has been deprecated in the SVG 2.0 specification. As an independent format, SVG fonts are supported by most browsers apart from Internet Explorer and Firefox, and support has been deprecated in Chrome (and Chromium). The approach most browser vendors have instead agreed on is the SVG font subset included in OpenType (and hence in the WOFF superset, see below), known as SVG-in-OpenType. Firefox has supported SVG-in-OpenType since Firefox 26.
TrueType/OpenType
Linking to industry-standard TrueType (TTF) and OpenType (TTF/OTF) fonts is supported by Mozilla Firefox 3.5+, Opera 10+, Safari 3.1+ and Google Chrome 4.0+. Internet Explorer 9+ supports only those fonts with embedding permissions set to installable.
Web Open Font Format
The Web Open Font Format (WOFF) is essentially OpenType or TrueType with compression and additional metadata. WOFF is supported by Mozilla Firefox 3.6+, Google Chrome 5+ and Opera (Presto), and by Internet Explorer 9 (since March 14, 2011). Support is available in Safari on Mac OS X Lion from release 5.1.
Unicode fonts
Only two fonts available by default on the Windows platform, Microsoft Sans Serif and Lucida Sans Unicode, provide a wide Unicode character repertoire. A bug in Verdana (and the different handling of it by various user agents) hinders its usability where combining characters are desired.
On free and open-source software platforms such as Linux, GNU Unifont and GNU FreeFont provide a wide range of Unicode characters.
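As a rough illustration (the selector and ordering are only an assumption, not a recommendation from any vendor), a stylesheet can list such wide-repertoire fonts ahead of the generic fallback so that uncommon Unicode characters have a better chance of rendering:
body {
  font-family: 'Lucida Sans Unicode', 'Microsoft Sans Serif', /* wide-repertoire fonts bundled with Windows */
               'FreeSans', 'Unifont',                          /* free fonts with broad Unicode coverage */
               sans-serif;                                      /* generic fallback */
}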
Alternatives
A common hurdle in Web design is the design of mockups that include fonts that are not Web-safe. There are a number of solutions for situations like this. One common solution is to replace the text with a similar Web-safe font or use a series of similar-looking fallback fonts.
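A brief sketch of such a fallback stack follows; the font names are placeholders, and how closely they resemble the mockup's typeface depends on the design. The browser uses the first font in the list that is available on the visitor's system.
h1 {
  font-family: 'Gill Sans',         /* the face used in the mockup, if installed */
               'Trebuchet MS',       /* a broadly available, similar-looking substitute */
               Verdana, sans-serif;  /* progressively safer fallbacks */
}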
Another technique is image replacement. This practice involves overlaying text with an image containing the same text written in the desired font. This is good for aesthetic purposes, but prevents text selection, increases bandwidth use, is bad for search engine optimization, and makes the text inaccessible for users with disabilities.
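One widely used CSS-only variant of image replacement, sketched below with a hypothetical class name and image, keeps the real text in the markup but moves it out of view while a background image of the same text is displayed; the drawbacks noted above still largely apply.
h1.site-title {
  width: 320px;
  height: 60px;
  background: url('site-title.png') no-repeat; /* image of the heading set in the desired font */
  text-indent: -9999px;                        /* push the live text off-screen */
  overflow: hidden;
  white-space: nowrap;
}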
Also common is the use of Flash-based solutions such as sIFR. This is similar to image replacement techniques, though the text is selectable and rendered as a vector. However, this method requires the presence of a proprietary plugin on a client's system.
Another solution is using JavaScript to replace the text with VML (for Internet Explorer) or SVG (for all other browsers).
Font hosting services allow users to pay a subscription to host non-Web-safe fonts online. Most services host the font for the user and provide the necessary @font-face CSS declaration.
An example of a CSS @font-face setup:
@font-face {
    font-family: 'Journal';
    /* Plain EOT first, in its own declaration, for IE9 compatibility modes */
    src: url('http://your-own.site/fonts/journal/journal.eot');
    /* The '?#iefix' query stops IE6-8 from misreading the rest of the list */
    src: url('http://your-own.site/fonts/journal/journal.eot?#iefix') format('embedded-opentype'),
         url('http://your-own.site/fonts/journal/journal.woff') format('woff'),
         url('http://your-own.site/fonts/journal/journal.ttf') format('truetype'),
         url('http://your-own.site/fonts/journal/journal.svg#Journal') format('svg');
    font-weight: normal;
    font-style: normal;
}
Practical considerations
In practice, it matters not only what web browser the audience is using but also how their operating system is configured. In 2010, type designer and consultant Thomas Phinney (Vice President of FontLab and formerly with Adobe) wrote a step-by-step process for finding the best rendering solution, which—more or less jokingly—uses a large number of goto statements. A more visually oriented flow chart was posted in the same year on the Typophile forum by Miha Zajec.
See also
Scalable Inman Flash Replacement
List of RFCs mentioned in the WOFF draft of 2009-10-23:
ZLIB Compressed Data Format Specification
Key words for use in RFCs to Indicate Requirement Levels
Matching of Language Tags
Notes
References
External links
W3C CSS Fonts Specification
Typoscan, a designer tool for analysing the typography of a website.
Digital typography
Web design
World Wide Web |
4220348 | https://en.wikipedia.org/wiki/List%20of%20English%20inventions%20and%20discoveries | List of English inventions and discoveries | English inventions and discoveries are objects, processes or techniques invented, innovated or discovered, partially or entirely, in England by a person from England. Often, things discovered for the first time are also called inventions, and in many cases there is no clear line between the two. Nonetheless, science and technology in England continued to develop rapidly in absolute terms. According to a Japanese research firm, over 40% of the world's inventions and discoveries were made in the UK, followed by France with 24% and the United States with 20%.
The following is a list of inventions, innovations or discoveries known or generally recognised to be English.
Agriculture
1627: Publication of first experiments in Water desalination and filtration by Sir Francis Bacon (1561–1626).
1701: Seed drill improved by Jethro Tull (1674–1741).
18th century: Invention of the horse-drawn hoe and scarifier by Jethro Tull.
1780s: Selective breeding and artificial selection pioneered by Robert Bakewell (1725–1795).
1842: Superphosphate or chemical fertilizer developed by John Bennet Lawes (1814–1900).
1850s: Steam-driven ploughing engine invented by John Fowler (1826–1864).
1901: First commercially successful light farm-tractor invented by Dan Albone (1860–1906).
1930s onwards: Developments in dairy farming systems pioneered by Rex Paterson (1902–1978).
Ceramics
1748: Fine porcelain developed by Thomas Frye (c. 1710–1762), of Bow porcelain factory, London. Cf. Frye's rivals at Chelsea porcelain factory.
1770s: Jasperware developed by Josiah Wedgwood (1730–1795).
1789–1793: Bone china created by Josiah Spode (1733–1797).
1813: Ironstone china invented by Charles James Mason (1791–1856).
Clock making
Anglo-Saxon times: type of candle clock invented by Alfred the Great (849–899).
c. 1657: Anchor escapement probably invented by Robert Hooke (1635–1703).
c. 1657: Balance spring added to balance wheel by Robert Hooke (1635–1703).
c. 1722: Grasshopper escapement invented by John Harrison (1693–1776); Harrison created the H1, H2, H3 and H4 marine timekeepers (to solve the longitude measurement problem).
c. 1726: Gridiron pendulum invented by John Harrison (1693–1776).
c. 1755: Lever escapement, the greatest single improvement ever applied to pocket watches, invented by Thomas Mudge (1715–1794).
1761: First true Marine chronometer perfected by John Harrison (1693–1776).
1923: Self-winding watch invented by John Harwood (1893–1964).
1955: First accurate atomic clock invented by Louis Essen (1908–1997).
1976: Coaxial escapement mechanism invented by George Daniels (1926–2011).
Clothing manufacturing
1589: The stocking frame, a mechanical knitting machine used in the textiles industry, invented by William Lee (1563–1614).
1733: The flying shuttle, a key development in the industrialization of weaving during the early Industrial Revolution, invented by John Kay of Walmersley (1704-c. 1779).
1759: The Derby Rib machine (for stocking manufacture) invented by Jedediah Strutt (1726–1797).
1764: The spinning jenny invented by James Hargreaves (c. 1720–1778).
1767: Spinning frame invented by John Kay of Warrington.
1769: The water frame, a water-powered spinning frame, developed by Richard Arkwright (1732–1792).
1775–1779: Spinning mule invented by Samuel Crompton (1753–1827).
1784: Power loom invented by Edmund Cartwright (1743–1823).
1790: Sewing machine invented by Thomas Saint.
1808: The bobbinet, a development on the warp-loom, invented by John Heathcoat (1783–1861).
1856: Mauveine, the first synthetic organic dye, discovered by William Henry Perkin (1838–1907).
1941: Polyester invented by John Rex Whinfield (1901–1966).
Communications
Pre-1565: The pencil invented in Seathwaite, Borrowdale, Cumbria, using Grey Knotts graphite.
1588: Modern shorthand invented by Timothy Bright (1551?–1615).
1661: The postmark (called the "Bishop Mark") introduced by English Postmaster General Henry Bishop (1611–1691/2).
1667: Tin can telephone, a device that conveyed sounds over an extended wire by mechanical vibrations, invented by Robert Hooke (1635–1703).
1714: Patent for an apparatus regarded as the first typewriter granted to Henry Mill (c. 1683–1771).
18th century: The Valentine's card first popularised.
1822: The mechanical pencil patented by Sampson Mordan (1790–1843) and John Isaac Hawkins (1772–1855).
1831: Electromagnetic induction & Faraday's law of induction. Began as a series of experiments by Michael Faraday (1791–1867); later became some of the first experiments in the discovery of radio waves and the development of radio.
1837: The first commercially successful electric telegraph developed by Sir Charles Wheatstone (1802–1875) and Sir William Fothergill Cooke (1806–1879).
1837: Pitman Shorthand invented by Isaac Pitman (1813–1897).
1840: Uniform Penny Post and postage stamp invented by Sir Rowland Hill (1795–1879).
1843: The Christmas card introduced commercially by Sir Henry Cole (1808–1882).
1873: Discovery of the photoconductivity of the element selenium by Willoughby Smith (1828–1891). Smith's work led to the invention of photoelectric cells (solar panels), including those used in the earliest television systems.
1879: The first radio transmission, using a spark-gap transmitter (achieving a range of approximately 500 metres), made by David E. Hughes (1831–1900).
1888: The world's first moving picture films, Roundhay Garden Scene and footage of Leeds Bridge, produced by Louis Le Prince (1841 – vanished 16 September 1890).
1897: The world's first radio station was located at The Needles Batteries on the western tip of the Isle of Wight; it was set up by Marconi.
1899: The world's first colour motion picture film produced by Edward Raymond Turner (1873–1903).
1902: Proposition by Oliver Heaviside (1850–1925) of the existence of the Kennelly–Heaviside layer, a layer of ionised gas that reflects radio waves around the Earth's curvature.
1912: Development of radio communication pioneered by William Eccles (1875–1966).
1914: The world's first automatic totalisator invented by English-born George Julius (1873–1946).
2 December 1922: Mechanical scanning device (a precursor to modern television) demonstrated at the Sorbonne, Paris, by Englishman Edwin Belin.
1930: The Plessey company in England began manufacturing the Baird Televisor receiver: the first television receiver sold to the public.
1931: Stereophonic sound or, more commonly, stereo invented at EMI in Hayes, Middlesex by Alan Blumlein (1903–1942).
1933: The 405-line television system (the first fully electronic television system used in regular broadcasting) developed at EMI in Hayes, Middlesex by Alan Blumlein (1903–1942), under the supervision of Sir Isaac Shoenberg.
1936: The world's first regular public broadcasts of high-definition television began from Alexandra Palace, North London by the BBC Television Service.
1930s: Radar pioneered at Bawdsey Manor by Scotsman Robert Watson-Watt (1892–1973) and Englishman Henry Tizard (1885–1939).
1945: The concept of geostationary satellites for the use of telecommunications relays popularised by Arthur C. Clarke (1917–2008).
1964 onwards: Use of fibre optics in telecommunications pioneered by Englishman George Hockham (1938–2013) and Chinese-born Charles K. Kao.
Late 1960s: Development of the long-lasting materials that made liquid crystal displays possible, by a team headed by Sir Brynmor Jones; the materials were developed by Scotsman George Gray and Englishman Ken Harrison in conjunction with the Royal Radar Establishment and the University of Hull, which ultimately discovered the crystals used in LCDs.
1970: The MTV-1, the first near pocket-sized handheld television, developed by Sir Clive Sinclair (born 1940).
1973: First transmissions of the Teletext information service made by the British Broadcasting Corporation.
1992: Clockwork radio invented by Trevor Baylis (1937–2018).
3 December 1992: The world's first text/SMS message ("Merry Christmas") sent over the Vodafone GSM network by Neil Papworth (born 1969).
2016: Holographic TV device created by the BBC.
Computing
1822: The Difference Engine, an automatic mechanical calculator designed to tabulate polynomial functions, proposed by Charles Babbage (1791–1871).
1837: The Analytical Engine, a proposed mechanical general-purpose computer, designed by Charles Babbage (1791–1871).
1842: The person regarded as the first computer programmer was Ada Lovelace (1815–1852), only legitimate child of the poet Byron and his wife Anne Isabella Milbanke, Baroness Wentworth.
1842: First programming language, the Analytical Engine order code, produced by Charles Babbage (1791–1871) and Ada Lovelace (1815–1852).
1854: Boolean algebra, the basis for digital logic, conceived by George Boole (1815–1864).
1912: Argo system, the world's first electrically powered mechanical analogue computer, invented by Arthur Pollen (1866–1937).
1918: The flip-flop circuit, which became the basis of electronic memory (Random-access memory) in computers, invented by William Eccles (1875–1966) and F. W. Jordan (1882–?).
1936–1937: The Universal Turing machine invented by Alan Turing (1912–1954). The UTM is considered to be the origin of the stored programme computer used in 1946 for the "Electronic Computing Instrument" that now bears John von Neumann's name: the Von Neumann architecture.
1939: The Bombe, a device used by the British to decipher German secret messages during World War II, invented by Alan Turing (1912–1954).
1943–1944: The Colossus computer – the world's first programmable, electronic, digital computer – invented by Tommy Flowers (1905–1988).
1946–1950: ACE and Pilot ACE invented by Alan Turing (1912–1954).
1946–1947: The Williams tube, a cathode ray tube used to store electronically (500 to 1,000 bits of) binary data, developed by Frederic Calland Williams (1911–1977) and Tom Kilburn (1921–2001).
1948: The Manchester Baby – the world's first electronic stored-programme computer – built by Frederic Calland Williams (1911–1977) and Tom Kilburn (1921–2001) at the Victoria University of Manchester.
1949: The Manchester Mark 1 computer developed by Frederic Calland Williams (1911–1977) and Tom Kilburn (1921–2001); historically significant because of its pioneering inclusion of index registers.
1949: EDSAC – the first complete, fully functional computer inspired by the von Neumann architecture, the basis of every modern computer – constructed by Maurice Wilkes (1913–2010).
Late 1940s/early 1950s: The integrated circuit, commonly called the microchip, conceptualised and built by Geoffrey Dummer (1909–2002).
February 1951: The Ferranti Mark 1 (a.k.a. the Manchester Electronic Computer), the world's first successful commercially available general-purpose electronic computer, invented by Frederic Calland Williams (1911–1977) and Tom Kilburn (1921–2001).
1951: The first known recordings of computer generated music played on the Ferranti Mark 1 computer using a programme designed by Christopher Strachey (1916–1975).
1951: LEO made history by running the first business application (payroll system) on an electronic computer for J. Lyons and Co. Under the advice of Maurice Wilkes (1913–2010), LEO was designed by John Pinkerton (1919–1997) and David Caminer (1915–2008).
1951: Concept of microprogramming developed by Maurice Wilkes (1913–2010) from the realisation that the Central Processing Unit (CPU) of a computer could be controlled by a miniature, highly specialised computer programme in high-speed ROM.
1952: Autocode developed by Alick Glennie (1925–2003) for the Manchester Mark 1 computer; Autocode is regarded as the first computer compiler.
1952: The first graphical computer game, OXO or Noughts and Crosses, programmed on the EDSAC at Cambridge University as part of a Ph.D. thesis by A.S. Douglas (1921–2010).
1952: First trackball built by Tom Cranston, Fred Longstaff and Kenyon Taylor (1908–1996); invented in 1947 by Ralph Benjamin.
1956 onwards: Metrovick 950, the first commercial transistor computer, built by the Metropolitan-Vickers Company of Manchester.
1958: EDSAC 2, the first computer to have a microprogrammed (Microcode) control unit and a bit slice hardware architecture, developed by a team headed by Maurice Wilkes (1913–2010).
1961: The Sumlock ANITA calculator, the world's first all-electronic desktop calculator, designed and built by the Bell Punch Company of Uxbridge.
1962: The Atlas computer – arguably the world's first supercomputer, and fastest computer in the world until the American CDC 6600 – developed by a team headed by Tom Kilburn (1921–2001). Introduced modern architectural concepts: spooling, interrupts, instruction pipelining, interleaved memory, virtual memory, and paging.
Late 1960s: Denotational semantics originated in the work of Christopher Strachey (1916–1975), a pioneer in programming language design.
1970: Packet switching co-invented by Welsh engineer Donald Davies (1924–2000) and Polish-born Paul Baran; it was Davies who coined the term packet switching at the National Physical Laboratory in London.
1972: The Sinclair Executive, the world's first small electronic pocket calculator, produced by Sir Clive Sinclair (born 1940).
1979: The first laptop computer, the GRiD Compass, designed by Bill Moggridge (1943–2012).
1979: Digital audio player (MP3 Player) invented by Kane Kramer (born 1956). His first investor was Sir Paul McCartney.
1980–1982: Home computers the Sinclair ZX80, ZX81 and ZX Spectrum produced by Sir Clive Sinclair (born 1940).
1981: The Osborne 1 – the first commercially successful portable computer, precursor to the laptop computer – developed by English-American Adam Osborne (1939–2003).
1982: 3D Monster Maze, widely considered the first survival horror computer game, developed from an idea by J. K. Greye and programmed by Malcolm Evans (b. 1944).
1984: The world's first pocket computer, the Psion Organiser, launched by London-based Psion PLC.
1984: Elite, the world's first computer game with 3D graphics, developed by David Braben (born 1964) and Ian Bell (born 1962).
1985: ARM architecture introduced by Cambridge computer manufacturer Acorn Computers; the ARM CPU design is the microprocessor architecture of 98% of mobile phones and every smartphone.
1989: World Wide Web invented by Sir Tim Berners-Lee (born 1955).
1989: HTTP application protocol and HTML markup language developed by Sir Tim Berners-Lee (born 1955).
1989: Launch of the first PC-compatible palmtop computer, the Atari Portfolio, designed by Ian H. S. Cullimore.
1989: First touchpad pointing device developed for London-based Psion PLC's Psion MC 200/400/600/WORD Series.
1990: The world's first web browser invented by Sir Tim Berners-Lee (born 1955). Initially called WorldWideWeb, it ran on the NeXTSTEP platform, and was renamed Nexus in order to avoid confusion with the World Wide Web.
1990: The world's first web server invented by Sir Tim Berners-Lee. Initially called WWWDaemon, it ran on the NeXTSTEP platform and it was publicly released in 1991; later it evolved and it was known as CERN httpd.
1991 onwards: Linux kernel development and maintenance were greatly helped by English-born Andrew Morton (born 1959) and Alan Cox (born 1968).
2002: Wolfram's 2-state 3-symbol Turing machine proposed by London-born Stephen Wolfram (born 1959).
2012: Launch of the Raspberry Pi, a modern single-board computer for education, designed and built by Cambridgeshire-based charity Raspberry Pi Foundation.
Criminology
1836: Marsh test (used for detecting arsenic poisoning) invented by James Marsh (1794–1846).
1888–1895: Fingerprint classification method developed by Sir Francis Galton (1822–1911); a breakthrough in forensic science.
1910: First use of wireless telegraphy in the arrest of a criminal, Dr Crippen.
1984: DNA fingerprints are discovered by Alec Jeffreys (born 1950).
1987: Process of DNA profiling developed by Alec Jeffreys (born 1950).
1991: Iris recognition algorithm invented by Swede John Daugman working at the University of Cambridge.
1995: World's first national DNA database developed: the National DNA Database.
Cryptography
1605: Bacon's cipher devised by Sir Francis Bacon (1561–1626).
1854: The Playfair cipher, the first literal digraph substitution cipher, invented by Charles Wheatstone (1802–1875).
1941: Codebreaker Bill Tutte (1917–2002) developed the Cryptanalysis of the Lorenz cipher, which Hitler used to communicate with his generals in World War II.
1973: Clifford Cocks (born 1950) first developed what came to be known as the RSA cipher at GCHQ, approximately three years before it was rediscovered by Rivest, Shamir, and Adleman at MIT.
Engineering
1600: The first electrical measuring instrument, the electroscope, invented by William Gilbert (1544–1603).
1676–1678: First working universal joint devised by Robert Hooke (1635–1703).
1698: First working steam pump invented by Thomas Savery (c. 1650–1715).
1709: First coke-consuming blast furnace developed by Abraham Darby I (1678–1717).
1712: Atmospheric steam engine invented by Thomas Newcomen (1664–1729).
1739: Screw-cutting lathe invented by Henry Hindley (1701–1771).
1770s: Continuous track first conceived by Anglo-Irish Richard Lovell Edgeworth (1744–1817).
1780: Modified version of the Newcomen engine (the Pickard engine) developed by James Pickard (dates unknown).
1781: The Iron Bridge, the first metal bridge, cast and built by Abraham Darby III (1750–1789).
1791: The first true gas turbine invented by John Barber (1734–1801).
1796–97: The first iron-framed building (and therefore forerunner of the skyscraper) – Ditherington Flax Mill in Shrewsbury, Shropshire – built by Charles Bage (1751–1822).
1800: First industrially practical screw-cutting lathe developed by Henry Maudslay (1771–1831).
1806: The Fourdrinier machine, a papermaking machine, invented by Henry Fourdrinier (1766–1854).
1823: First internal combustion engine to be applied industrially patented by Samuel Brown (?–1849).
1826: Continuous track (under the name "universal railway") patented by Sir George Cayley (1773–1857).
1830: First (toroidal, closed-core) electric transformer invented by Michael Faraday (1791–1867).
1831: First Electrical generator (or dynamo), the Faraday disk, invented by Michael Faraday.
1834–1878: Water and sewerage systems for over thirty cities across Europe designed by William Lindley (1808–1900).
1840s: The linear motor, a multi-phase alternating current (AC) electric motor, proposed by Charles Wheatstone (1802–1875); 1940s: developed by Eric Laithwaite (1921–1997).
1841: Widely accepted standard for screw threads devised by Joseph Whitworth (1803–1887).
1842: The adjustable spanner invented by Edwin Beard Budding (1796–1846).
1845: Hydraulic crane developed by William Armstrong (1810–1900); in 1863, Armstrong also built the first house in the world powered by hydroelectricity, at Cragside, Northumberland.
1846: The first fireproof warehousing complex – Albert Dock, Liverpool – designed by Jesse Hartley (1780–1860).
1848: The Francis turbine developed by James B. Francis (1815–1892), born near Witney, Oxfordshire.
1868: First commercial steel alloy produced by Robert Forester Mushet (1811–1891).
1869–1875: Crookes tube, the first cathode ray tube, invented by William Crookes (1832–1919).
1871: First enclosed wind tunnel invented, designed and operated by Francis Herbert Wenham (1824–1908).
1872: The Carey Foster bridge, a type of bridge circuit, invented by Carey Foster (1835–1919).
1880–1883: The Wimshurst machine, an Electrostatic generator for producing high voltages, developed by James Wimshurst (1832–1903).
1884: Steam turbine invented by Charles Algernon Parsons (1854–1913).
1885: Compression ignition engine (a.k.a. the diesel engine) invented by Herbert Akroyd Stuart (1864–1927).
1886: Prototype hot bulb engine or heavy oil engine built by Herbert Akroyd Stuart (1864–1927).
1889: Two-stroke engine invented by Joseph Day (1855–1946).
1890: Opening of the Forth Bridge – monumental cantilever railway bridge, and icon of Scotland – designed and engineered by English civil engineers Benjamin Baker (1840–1907) and John Fowler (1817–1898).
1902: Disc brakes patented by Frederick W. Lanchester (1868–1946).
1904: Vacuum tube (or valve) invented by John Ambrose Fleming (1849–1945).
1907: First reported observation of electroluminescence from a diode by H. J. Round (1881–1966); Round's discovery led to the creation of the light-emitting diode.
1917 onwards: Radio guidance systems pioneered by Archibald Low (1888–1956).
1935: Arnold Frederic Wilkins (1907–1985) contributed to the development of radar.
1940: Cavity magnetron improved by John Randall (1905–1984) and Harry Boot (1917–1983); consequently a critical component in microwave ovens and some radar.
Late-1940s/early 1950s: The microchip invented by Geoffrey W.A. Dummer (1909–2002).
1963: High strength carbon fibre invented at the Royal Aircraft Establishment in 1963. January 1969: Carr Reinforcements (Stockport, England) wove the first carbon fibre fabric in the world.
2007: The RepRap Project, the first self-replicating 3D Printer, developed at the University of Bath.
Household appliances
13th century: Magnifying glass described by Roger Bacon (c. 1214 – c. 1292).
Before 1596: Modern flushing toilet invented by John Harington (1560–1612). The term 'John', used particularly in the US, is generally accepted as a direct reference to its inventor.
1733: Perambulator developed by William Kent (c. 1685–1748).
1780: First mass-produced toothbrush produced by William Addis (1734–1808).
1795: First corkscrew patent granted to the Reverend Samuel Henshall (1764/5–1807).
1810: Tin can for food preservation patented by merchant Peter Durand (dates not known).
1818: Modern fire extinguisher invented by George William Manby (1765–1854).
1828: Thermosiphon, which forms the basis of most modern central heating systems, invented by Thomas Fowler (1777–1843).
1830: Lawn mower invented by Edwin Beard Budding (1796–1846).
1836: The Daniell cell – a type of electrochemical cell; an element of an electric battery – invented by John Frederic Daniell (1790–1845).
1840: Postage stamp invented by Sir Rowland Hill (1795–1879).
1845: Rubber band patented by inventor Stephen Perry (dates not known).
1878: Incandescent light bulb invented by Joseph Wilson Swan (1828–1914).
1884: Light switch invented by John Henry Holmes (dates not known) in Shieldfield.
1899: Little Nipper Mouse trap invented by James Henry Atkinson (1849–1942).
Late-19th century: Commercially produced electric toaster developed by R. E. B. Crompton (1845–1940).
Late-19th century: Modern pay toilet invented by John Nevil Maskelyne (1839–1917); Maskelyne invented a lock for London toilets, which required a penny to operate, hence the euphemism "spend a penny".
1901: First powered vacuum cleaner invented by Hubert Cecil Booth (1871–1955).
Before 1902: First practical Teasmade designed by clockmaker Albert E. Richardson (dates not known) of Ashton-under-Lyne.
Before 1920: Folding carton invented by Charles Henry Foyle (died 1948).
1924: First modern dishwasher invented by William Howard Livens (1889–1964).
1955: First fully automatic electric kettle produced by manufacturer Russell Hobbs of Failsworth, Greater Manchester.
1963: Lava lamp invented by accountant Edward Craven Walker.
1965: Collapsible baby buggy produced by Owen Finlay Maclaren (1907–1978).
1983: "Bagless" vacuum cleaner invented by James Dyson (born 1947).
Industrial processes
1740: English crucible steel developed by Benjamin Huntsman (1704–1776).
1743: Sheffield plate, a layered combination of silver and copper, invented by Thomas Boulsover (1705–1788).
1746: The lead chamber process, for producing sulfuric acid in large quantities, invented by John Roebuck (1718–1794).
c. 1760-c. 1840: Pioneers of the Industrial Revolution – Isambard Kingdom Brunel (1806–1859); Abraham Darby I (1678–1717); Abraham Darby II (1711–1763); Abraham Darby III (1750–1789); Robert Forester Mushet (1811–1891).
1769: The water frame, a water-powered spinning frame, invented by Richard Arkwright (1732–1792).
c. 1770: Coade stone, a high quality stoneware, created by Eleanor Coade (1733–1821).
1784–1789: Power loom developed by Edmund Cartwright (1743–1823).
1795: Hydraulic press invented by Joseph Bramah (1748–1814).
1820: The Rubber Masticator, a machine for recycling rubber, invented by Thomas Hancock (1786–1865).
1824: Portland cement patented by Joseph Aspdin (1778–1855).
1840: Electroplating process patented by George Elkington (1801–1865).
1843: Vulcanisation of rubber, a process for making natural rubber more durable, patented by Thomas Hancock (1786–1865).
1850: The Parkes process, for removing silver from lead during the production of bullion, invented by Alexander Parkes (1813–1890).
1850–1855: Steel production Bessemer process developed by Henry Bessemer (1813–1898).
1862: First man-made plastic – Nitrocellulose, branded Parkesine – invented by Alexander Parkes (1813–1890).
1912: Stainless steel invented by Harry Brearley (1871–1948).
1933: First industrially practical polythene discovered by accident in 1933 by Eric Fawcett and Reginald Gibson in Northwich.
1952: The float glass process, for the manufacture of high-quality flat glass, invented by Alastair Pilkington (1920–1995).
1950s: The Wilson Yarn Clearer developed by inventor Peter Wilson (dates not known).
2001: Self-cleaning glass is developed by Pilkington.
Medicine
Anglo-Saxon times: The earliest pharmacopoeia in English (Cotton Vitellius, MS C. iii).
1628: First correct description of circulation of the blood in De Motu Cordis by William Harvey (1578–1657).
18th century: Invention of surgical forceps attributed to Stephen Hales (1677–1761).
c. 1711: First blood pressure measurement and first cardiac catheterisation by Stephen Hales (1677–1761).
1763: Aspirin's active ingredient discovered by Edward Stone (1702–1768).
1770s: Isolation of fibrin, a key protein in the blood coagulation process; investigation of the structure of the lymphatic system; and description of red blood cells by surgeon William Hewson (1739–1774), so-called "father of haematology".
1775: First demonstration that a cancer may be caused by an environmental carcinogen by Percivall Pott (1714–1788), also a founding father of orthopaedics.
1794: Colour blindness first described in a paper titled "Extraordinary facts relating to the vision of colours" by John Dalton (1766–1844).
1798: Smallpox vaccine, the first successful vaccine to be developed, invented by Edward Jenner (1749–1823); in so doing, Jenner is said to have "saved more lives [. . .] than were lost in all the wars of mankind since the beginning of recorded history."
1800: Anaesthetic properties of nitrous oxide (entonox/"laughing gas") discovered by Humphry Davy (1778–1829).
1817: First description of (what would come to be called) Parkinson's disease in "An Essay on the Shaking Palsy" by James Parkinson (1755–1824).
1818 or 1829: First successful blood transfusion performed by James Blundell (1791–1878).
1819: First accurate description of hay fever by John Bostock (1773–1846).
1847: Ophthalmoscope conceived by Charles Babbage (1791–1871).
1850s: Location of the source of cholera by pioneer of anaesthesia and "father of epidemiology" John Snow (1813–1858).
1850s: General anaesthetic pioneered by Englishman John Snow (1813–1858) and Scotsman James Young Simpson.
1850s onwards: Treatment of epilepsy pioneered by Edward Henry Sieveking (1816–1904).
1858: First publication of Gray's Anatomy, widely regarded as the first complete human-anatomy textbook, by Henry Gray (1827–1861).
1860 onwards: Modern nursing pioneered by Florence Nightingale (1820–1910).
1867: Antisepsis in surgery invented by Joseph Lister (1827–1912).
1867: Clinical thermometer devised by Thomas Clifford Allbutt (1836–1925).
1887: First practical ECG machine invented by Augustus Waller of St Mary's Hospital in London.
1898: The mosquito identified as the carrier of malaria by Sir Ronald Ross (1857–1932).
1901: Amino acid Tryptophan discovered by Frederick Gowland Hopkins (1861–1947).
1902: First typhoid vaccine developed by Almroth Wright (1861–1947).
1912: Vitamins discovered by Frederick Gowland Hopkins (1861–1947).
1915: Acetylcholine (ACh) identified by Sir Henry Hallett Dale (1875–1968) for its action on heart tissue.
1937 onwards: Protein crystallography developed by Dorothy Crowfoot Hodgkin (1910–1994); Hodgkin solved the structures of cholesterol (1937), penicillin (1946), and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964; in 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
1937: Discovery of the Citric acid cycle ("Krebs Cycle") by German-born (naturalised) British physician and biochemist Hans Adolf Krebs (1900–1981) at the University of Sheffield.
1940s: Groundbreaking research on the use of penicillin in the treatment of venereal disease carried out in London by Jack Suchet (1908–2001) with Scottish scientist Sir Alexander Fleming.
1941: Crucial first steps in the mass production of penicillin made by Norman Heatley (1911–2004).
1949: Diagnostic ultrasound first used to assess the thickness of bowel tissue by English-born physicist John J. Wild (1914–2009), so-called "father of medical ultrasound".
1949–1950: Artificial intraocular lens transplant surgery for cataract patients developed by Harold Ridley (1906–2001).
Late 1950s: Peak Flow Meter invented by Martin Wright (1912–2001), also the creator of the Syringe Driver.
1960 onwards: The hip replacement operation (in which a stainless steel stem and 22mm head fit into a polymer socket and both parts are fixed into position by PMMA cement) pioneered by John Charnley (1911–1982).
1960s: First use of sodium cromoglycate for asthma prophylaxis associated with Roger Altounyan (1922–1987).
1967 onwards: Computed Tomography and first commercial CT scanner invented by Sir Godfrey Hounsfield (1919–2004) in Hayes, Middlesex, at EMI Central Research Laboratories.
1969–1978: Development of in vitro fertilisation (IVF) by Patrick Christopher Steptoe (1913–1988) and Robert Geoffrey Edwards (1925–2013).
Late 1970s: Echo-planar imaging (EPI) technique, a contribution to the development of magnetic resonance imaging (MRI), developed by Sir Peter Mansfield (born 1933).
1980: Potential of hematopoietic stem cell transplantation in treating a wide range of genetic diseases, among other breakthroughs, discovered by John Raymond Hobbs (1929–2008).
1981: Discovery of how to culture embryonic stem cells credited to England-born biologist Martin Evans (born 1941).
1993: Viagra (a.k.a. Sildenafil – compound UK-92,480) synthesised by a group of pharmaceutical chemists working at Pfizer's Sandwich, Kent research facility in England. The press identified Peter Dunn and Albert Wood as the inventors of the drug; only Andrew Bell, David Brown and Nicholas Terrett are listed on the original composition of matter patent.
2009: First baby genetically selected to be free of a breast cancer gene (BRCA1) born at University College Hospital.
2014: The "Mom incubator", an Inflatable incubator for reducing mortality rates in premature babies, invented by James Roberts.
Military
1718: The Puckle Gun or Defence Gun, a multi-shot gun mounted on a tripod, invented by James Puckle (1667–1724).
1784: Shrapnel shell, an anti-personnel artillery munition, developed by Henry Shrapnel (1761–1842).
1804: The Congreve rocket, a weapon, invented by Sir William Congreve (1772–1828).
1830s: The safety fuse invented by William Bickford (1774–1834).
1854: The Whitworth rifle, often called the "sharpshooter" because of its accuracy and considered one of the earliest examples of a sniper rifle, invented by Sir Joseph Whitworth (1803–1887).
1854–1857: The Armstrong Gun, a uniquely designed field and heavy gun, developed by Sir William Armstrong (1810–1900).
1866: First effective self-propelled naval torpedo invented by Robert Whitehead (1823–1905).
1875: The side by side boxlock action, commonly used in double barreled shotguns, invented by William Anson and John Deeley for the Westley Richards company of Birmingham.
1884: The Maxim gun, the first self-powered machine gun invented by Sir Hiram Maxim (1840–1916); American-born, Maxim moved from the United States to England in 1881, becoming a (naturalised) British subject. The Maxim gun was financed by Albert Vickers of Vickers Limited and produced in Hatton Garden, London. It has been called "the weapon most associated with British imperial conquest".
1891: Cordite, first of the "smokeless powders" which came into general use towards the end of the 19th century, invented by Englishman Frederick Abel (1827–1902) and Scot James Dewar.
1901: Bullpup firearm configuration first used in the Thorneycroft carbine rifle, developed by an English gunsmith as patent No. 14,622 of July 18, 1901.
1906: The Dreadnought battleship, the predominant type of battleship in the early 20th century, credited to First Sea Lord Admiral John "Jackie" Fisher (1841–1920).
1914: First operational fighter aircraft, the Vickers F.B.5 (a.k.a. the "Gunbus"), developed from a design by Archibald Low (1888–1956).
1916: The tank developed and first used in combat by the British during World War I as a means to break the deadlock of trench warfare. Key co-inventors include Major Walter Gordon Wilson (1874–1957) and Sir William Tritton (1875–1946).
1916: The first effective depth charge, an anti-submarine warfare weapon, developed from a design by Herbert Taylor at the RN Torpedo and Mine School, HMS Vernon.
1916: The Livens Projector, a weapon, created by William Howard Livens (1889–1964).
1917: Dazzle camouflage created by Norman Wilkinson (1878–1971).
1917: ASDIC active sonar, the first practical underwater active sound-detection apparatus, developed by Canadian physicist Robert William Boyle and English physicist Albert Beaumont Wood (1890–1964).
1940s: High-explosive squash head, a type of ammunition, invented by Sir Charles Dennistoun Burney (1888–1968).
1941: The Fairbairn-Sykes fighting knife invented by William Ewart Fairbairn (1885–1960) and Eric A. Sykes (1883–1945).
1941–1942: The Bailey bridge – a type of portable, pre-fabricated, truss bridge – invented by Donald Bailey (1901–1985). Field Marshal Montgomery emphasised the importance of the Bailey bridge in Britain winning the war.
1943: The bouncing bomb invented by Barnes Wallis (1887–1979).
1943: H2S radar (airborne radar to aid bomb targeting) invented by Alan Blumlein (1903–1942). Blumlein died in a plane crash during a secret trial of the H2S system.
1950: The steam catapult, a device used to launch aircraft from aircraft carriers, developed by Commander Colin C. Mitchell RNR.
1960s: Chobham armour, a type of vehicle armour, developed by a team headed by Gilbert Harvey of the FVRDE at the tank research centre on Chobham Common, Surrey.
1960: Harrier Jump Jet developed by Hawker Aircraft of Kingston upon Thames following an approach by the Bristol Aeroplane Company in 1957.
Late 1970s: Stun grenades developed by the British Army's SAS.
Mining
1712: The Newcomen Engine invented by Thomas Newcomen (1664–1729); from c. 1705 Newcomen was first to use a Beam engine to pump water from mines.
1815: The Davy lamp, a safety lamp, invented by Humphry Davy (1778–1829).
1815: The Geordie lamp, a safety lamp, invented by George Stephenson (1781–1848).
Musical instruments
1695: Northumbrian smallpipes (a.k.a. Northumbrian pipes) associated with Northumberland and Tyne and Wear.
1711: The Tuning fork invented by John Shore (c. 1662–1752).
1798: The harp lute invented by Edward Light (c. 1747-c. 1832); Light patented the instrument in 1816.
Early 19th century: The Irish flute is not an instrument indigenous to Ireland; a key figure in its development was English inventor and flautist Charles Nicholson (1775–1810).
1829: The concertina invented by Charles Wheatstone (1802–1875).
Early 20th century: The theatre organ developed by Robert Hope-Jones (1859–1914).
1870: Carbon microphone, invented by David Edward Hughes.
1968: The logical bassoon, an electronically controlled version of the bassoon, developed by Giles Brindley (born 1926).
Photography
Before 1800: Method of copying images chemically to permanent media devised by Thomas Wedgwood (1771–1805).
1838: The Stereoscope, a device for displaying three-dimensional images, invented by Charles Wheatstone (1802–1875).
1840: Calotype or Talbotype invented by William Fox Talbot (1800–1877).
1850s: The Collodion process, an early photographic process, invented by Frederick Scott Archer (1813–1857).
1850s: The Ambrotype invented by Frederick Scott Archer (1813–1857) and Peter Wickens Fry (1795–1860).
1861: The Collodion-albumen process, an early dry plate process, invented by Joseph Sidebotham (father of Joseph Watson Sidebotham).
1871: The dry plate process, the first economically successful and durable photographic medium, invented by Richard Leach Maddox (1816–1902).
1878: The Horse in Motion or Sallie Gardner at a Gallop, a precursor to the development of motion pictures, created by Eadweard Muybridge (1830–1904).
1879: The Zoopraxiscope, which may be considered the first movie projector, created by Eadweard Muybridge (1830–1904).
1880s: Method of intensifying plates with mercuric iodide devised by B. J. Edwards (1838–1914); Edwards pioneered also the construction and design of instantaneous shutters.
1887: Celluloid motion pictures created by William Friese-Greene (1855–1921).
1906: Kinemacolor, the first successful colour motion picture process, invented by George Albert Smith (1864–1959).
Publishing firsts
1475: First book printed in the English language, Recuyell of the Historyes of Troye, by William Caxton (c. 1422–c. 1491); eighteen copies survive.
1534: Cambridge University Press granted letters patent by Henry VIII; continuous operation since makes it the world's oldest publisher and printer.
1535: First complete printed translation of the Bible into English produced by Myles Coverdale (1488–1569).
1665: Philosophical Transactions, the first journal exclusively devoted to science, established by the Royal Society of London; it is also the world's longest-running scientific journal.
British Raj period: The first definitive map of India drawn by English cartographers.
Mid-19th century: First noted journal club by English surgeon Sir James Paget (1814–1899); recalling in his memoirs time spent at St. Bartholomew's Hospital in London, Paget describes "a kind of club [. . .] a small room over a baker's shop near the Hospital-gate where we could sit and read the journals."
1893: Benjamin Daydon Jackson prepares the first volume of Index Kewensis, first publication aiming to register all botanical names for seed plants at the rank of species and genera.
Science
Physics
1600: Recognition that the earth was a giant magnet, by William Gilbert (1544–1603) in his six-book work De Magnete; De Magnete was known all over Europe, and was almost certainly an influence on Galileo.
1660: Hooke's Law (equation describing elasticity) proposed by Robert Hooke (1635–1703).
1666–1675: Theories on optics proposed by Sir Isaac Newton (1642–1726/7); Newton published Opticks in 1704.
1687: Law of universal gravitation formulated in the Principia by Sir Isaac Newton (1642–1726/7).
1687: Newton's laws of motion formulated in the Principia.
1800: Infrared radiation discovered by Sir William Herschel (1738–1822).
1802: Theory on physiological basis of colour vision proposed by Thomas Young (1773–1829).
1803–1807: Evidence for a wave theory of light discovered by Thomas Young (1773–1829).
1823: Electromagnet invented by William Sturgeon (1783–1850).
1831: Discovery that electric current could be generated by altering magnetic fields (the principle underlying modern power generation) by Michael Faraday (1791–1867).
1845: Proposition that light and electromagnetism are related by Michael Faraday (1791–1867).
1845–1847: Demonstration that electric circuits obey the law of the conservation of energy and that electricity is a form of energy (First Law of Thermodynamics) by James Joule (1818–1889); the unit of energy the Joule is named after him.
1871 and 1885: Discovery of the phenomenon Rayleigh scattering (which can be used to explain why the sky is blue), and prediction of the existence of surface waves by John Strutt, 3rd Baron Rayleigh (1842–1919).
1897: Discovery of the electron by J. J. Thomson (1856–1940).
1911: Discovery of the Rutherford model of the Atom by Ernest Rutherford (1871–1937).
1912: Invention of the mass spectrometer by J. J. Thomson (1856–1940).
1912: Bragg's law and the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances, discovered by William Henry Bragg (1862–1942) and William Lawrence Bragg (1890–1971).
1913: Discovery of isotopes by J. J. Thomson (1856–1940).
1917: Discovery of the Proton by Ernest Rutherford (1871–1937).
1924: Edward Victor Appleton awarded Nobel Prize in Physics in 1947 for proving the existence of the ionosphere during experiments carried out in 1924.
1928: Existence of antimatter predicted by Paul Dirac (1902–1984); Dirac made major contributions to the development of quantum mechanics.
1932: Splitting the atom, a fully artificial nuclear reaction and nuclear transmutation, first achieved by English physicist John Cockcroft (1897–1967) and Ireland's Ernest Walton.
1932: Discovery of the Neutron by James Chadwick (1891–1974).
1935: Possibility of Radar first proven in the "Daventry experiment" by Englishman Arnold Frederic Wilkins (1907–1985) and Scot Robert Watson-Watt.
1947: Holography invented in Rugby, England by Hungarian-British Dennis Gabor (1900–1979; fled from Nazi Germany in 1933). The medium was improved by Nicholas J. Phillips (1933–2009), who made it possible to record multi-colour reflection holograms.
1947: Discovery of the pion (pi-meson) by Cecil Frank Powell (1903–1969).
1964: The Higgs boson, an elementary particle implied by the Higgs field, proposed by Peter Higgs (born 1929) and others to explain why fundamental particles (which are theoretically weightless) might have acquired mass after their formation in the Big Bang.
1974: Hawking radiation predicted by Stephen Hawking (1942–2018).
Chemistry
Anglo-Saxon times: Anglo-Saxon goldsmiths used a process similar to cementation, as evidenced by the Staffordshire Hoard.
1665: Correct theory of combustion first outlined in Micrographia by Robert Hooke (1635–1703); Hooke observed that something (known now as oxygen) is taken from the air and that in its absence combustion quickly ceases, however much heat is applied.
1766: Hydrogen discovered by Henry Cavendish (1731–1810); Cavendish described it as a colourless, odourless gas that burns and can form an explosive mixture with air.
1775: Oxygen discovered by Joseph Priestley (1733–1804); Priestley called it "dephlogisticated air".
1791: William Gregor (25 December 1761 – 11 June 1817) discovered the elemental metal titanium.
1801: Charles Hatchett FRS (2 January 1765 – 10 March 1847) discovered the element niobium.
1803: William Hyde Wollaston PRS (6 August 1766 – 22 December 1828) discovered the chemical element rhodium.
1803: William Hyde Wollaston PRS (6 August 1766 – 22 December 1828) discovered the chemical element palladium.
1803: Smithson Tennant FRS (30 November 1761 – 22 February 1815) discovered the element iridium.
1803: Smithson Tennant FRS (30 November 1761 – 22 February 1815) discovered the element osmium.
1803: Modern atomic theory developed by John Dalton (1766–1844). See also Dalton's law and Law of multiple proportions; Dalton is considered the father of modern chemistry.
1807: Sodium isolated by Sir Humphry Davy (1778–1829).
1807: Potassium isolated by Sir Humphry Davy (1778–1829).
1808: Calcium isolated by Sir Humphry Davy (1778–1829).
1808: Strontium isolated by Sir Humphry Davy (1778–1829).
1808: Barium isolated by Sir Humphry Davy (1778–1829).
1808: Magnesium isolated by Sir Humphry Davy (1778–1829).
1808: Boron isolated by Sir Humphry Davy (1778–1829).
1810: Elemental nature of Chlorine discovered by Sir Humphry Davy (1778–1829).
1813: Elemental nature of Iodine discovered by Sir Humphry Davy (1778–1829).
1825: Benzene, the first known aromatic hydrocarbon, isolated and identified by Michael Faraday (1791–1867).
1861: Thallium discovered by William Crookes (1832–1919).
1865: Periodic Table devised by John Newlands (1837–1898); his Law of Octaves was a precursor to the Periodic Law.
1868: Helium discovered in the sun (via spectroscopy) by Norman Lockyer (1836–1920); not until ten years later was it found on earth.
1868: Synthesis of coumarin (one of the first synthetic perfumes), and cinnamic acid via the Perkin reaction by William Henry Perkin (1838–1907).
1893: The Weston cell developed by England-born chemist Edward Weston (1850–1936).
1894: Argon discovered by English physicist John Strutt, 3rd Baron Rayleigh (1842–1919) and Scot William Ramsay.
1898: The elements xenon, neon and krypton discovered by English chemist Morris Travers working with Scot Sir William Ramsay.
1901: Silicone discovered and named by Frederic Kipping (1863–1949); according to the nomenclature of modern chemistry, silicone is no longer the correct term, but it remains in common usage.
1913: Concept of atomic number introduced by Henry Moseley (1887–1915) in order to fix the inadequacies of Mendeleev's periodic table, which had been based on atomic weight. Isaac Asimov wrote, "In view of what he [Moseley] might still have accomplished … his death might well have been the most costly single death of the War to mankind generally."
1913: Existence of isotopes first proposed by Frederick Soddy (1877–1956).
1940s / 1950s: Partition chromatography developed by Richard Laurence Millington Synge (1914–1994) and Archer J.P. Martin (1910–2002).
1950: VX nerve agent invented by Ranajit Ghosh at Porton Down; VX is among the world's deadliest chemical compounds, with as little as 10 milligrams constituting a fatal dose.
1952: Structure of ferrocene discovered by Geoffrey Wilkinson (1921–1996) and others.
1959: First practical hydrogen–oxygen fuel cell developed by Francis Thomas Bacon (1904–1992).
1962: First noble gas compound, xenon hexafluoroplatinate, prepared by Neil Bartlett (1932–2008).
1985: Buckminsterfullerene discovered by Sir Harry Kroto (born 1939).
Biology
1665: Cell biology originated by Robert Hooke (1635–1703), who discovered the first cells in the course of describing the microscopic compartments within cork.
Early 19th century: The first recognition of what certain fossils were, through the discoveries of Mary Anning.
1839: The identification and discovery of 150 mosses, lichens, liverworts, flowering plants and algae on the Kerguelen Islands by botanist Joseph Dalton Hooker. He later said of his gatherings "many of my best little lichens were gathered by hammering out the turfs or sitting on them till they thawed."
1855: The discovery of the first coal ball by Joseph Dalton Hooker who later on with partner William Binney made the first scientific description of coal balls.
1859: Theories of evolution by natural selection and sexual selection set out in On the Origin of Species by Charles Darwin (1809–1882).
1883: The practice of Eugenics developed by Sir Francis Galton (1822–1911), applying his half-cousin Charles Darwin's theory of evolution to humans.
1953: Double-helix structure of DNA determined by Englishman Francis Crick (1906–2004) and American James Watson. Crick was a pioneer in the field of molecular biology.
1958: First cloning of an animal, a frog, using intact nuclei from the somatic cells of a Xenopus tadpole, by Sir John Gurdon.
1950 onward: The pioneering use of Xenopus eggs to translate microinjected messenger RNA molecules by Sir John Gurdon and fellow researchers, a technique which has been widely used to identify the proteins encoded and to study their function.
1960 onwards: Pioneering observation-based research into the behaviour of chimpanzees (our closest relatives in the animal kingdom) conducted by Jane Goodall (born 1934).
1977: DNA sequencing by chain termination developed by Frederick Sanger (1918–2013). Sanger won the Nobel Prize for Chemistry twice.
1977: Discovery of introns in eukaryotic DNA and the mechanism of gene-splicing by Richard J. Roberts (born 1943).
1996: Dolly the Sheep born as a result of Nuclear transfer, a form of cloning put into practice by Ian Wilmut (born 1944) and Keith Campbell (1954–2012).
2016: Scientists at the British biotech company Oxitec, in an attempt to stop the spread of dengue fever, genetically engineered a 'sudden death' mosquito: after a modified male mates successfully with a wild female, any offspring produced do not survive to adulthood, and the lethal gene is passed on, continuing the cycle. Some 3,019,000 mosquitoes were released on Grand Cayman and, after three months, 80% of the mosquito population in the target area had vanished.
Mathematics and statistics
1630–1632: The slide rule invented by William Oughtred (1574–1660), developing on work by Edmund Gunter (1581–1626) and Edmund Wingate (1596–1656).
1631: The "x" symbol for multiplication and the abbreviations "sin" and "cos" for the sine and cosine functions devised by William Oughtred (1574–1660) in Clavis Mathematicae (The Key to Mathematics).
1631: The symbols for "is less than" and "is greater than", along with other innovations, devised in the posthumously published algebra text Artis Analyticae Praxis by Thomas Harriot (c. 1560–1621).
1687: Calculus developed by Sir Isaac Newton (1642–1726/7), as set out in his Principia Mathematica.
1763 onwards: Key contributions made to the development of statistics by: Thomas Bayes (c. 1701–1761) (Bayes' theorem); Florence Nightingale (1820–1910) (statistical graphics); Francis Galton (1822–1911) (standard deviation, correlation, regression, questionnaires); Karl Pearson (1857–1936) (correlation coefficient, chi-square); William Gosset (1876–1937) (Student's t-distribution); Ronald Fisher (1890–1962) (Analysis of variance); Frank Yates (1902–1994).
1854: Boolean algebra, the basis for digital logic, proposed by George Boole (1815–1864).
1876: Connection between energy, matter and the curvature of space proposed in On the Space-Theory of Matter by William Kingdon Clifford (1845–1879), forty years before Einstein's general theory of relativity.
c. 1880: The Venn diagram devised by John Venn (1834–1923).
1884: Reformulation of Maxwell's equations into the four we know now by Oliver Heaviside (1850–1925).
1901: Discipline of modern mathematical statistics developed by Karl Pearson (1857–1936).
Astronomy
1609: First drawing of the Moon through a telescope by Thomas Harriot (c. 1560 – 1621); Harriot achieved this on 26 July 1609, over four months before Galileo.
1610: Sunspots discovered by Thomas Harriot (c. 1560–1621).
1668: Newtonian telescope invented by Sir Isaac Newton (1642–1727).
1705: Periodicity of Halley's Comet determined by Edmond Halley (1656–1742).
1712–1717: The Planetarium created by French-born Briton John Theophilus Desaguliers (1683–1744).
1758: Achromatic doublet lens patented by John Dollond (1706–1761).
1781: Discovery of the planet Uranus by Sir William Herschel (1738–1822); Herschel also discovered the moons Titania (1787), Oberon (1787), Enceladus (1789), and Mimas (1789).
1783: Existence of black holes first proposed by John Michell (1724–1793); Michell was first to suggest that double stars might be attracted to each other (1767), and he invented the torsion balance (before 1783).
1843: Existence and position of Neptune predicted, using only mathematics, by John Couch Adams (1819–1892).
1845: Nature of spiral galaxies discovered by William Parsons, 3rd Earl of Rosse (1800–1867).
1846: Discovery of Triton by William Lassell (1799–1880); Lassell also discovered the moons Hyperion (1848), Ariel (1851), and Umbriel (1851).
1924: The Eddington limit – the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object – discovered by Sir Arthur Stanley Eddington (1882–1944).
1930s–1950s: Important contributions to the development of radio astronomy made by Bernard Lovell (1913–2012).
1946–1954: Pioneering theories of Nucleosynthesis (the formation of chemical elements in stars and supernova) proposed by Sir Fred Hoyle (1915–2001); in 1949, Hoyle coined the term "Big Bang".
1966 onwards: Important contributions to cosmology and (from 1973) quantum gravity made by Stephen Hawking (born 1942), especially in the context of black holes.
1967: Pulsars discovered by English radio astronomer Antony Hewish (born 1924) and one of his graduate students, the Northern Irish astronomer Jocelyn Bell.
Late 1960s / early 1970s: Aperture synthesis, used for accurate location and imaging of weak radio sources in the field of radio astronomy, developed by Martin Ryle (1918–1984) and Antony Hewish (born 1924).
Geology and meteorology
1802: Nomenclature system for clouds developed by meteorology pioneer Luke Howard (1772–1864).
1815: First geological map of Great Britain created by William Smith (1769–1839); Smith is responsible, as well, for the observation that fossils can be used to work out the relative ages of rocks and strata (Principle of Faunal Succession).
1820: The dew-point hygrometer, an instrument used for measuring the moisture content in the atmosphere, invented by John Frederic Daniell (1790–1845).
1820s: Scientific study of dinosaurs initiated by Gideon Mantell (1790–1852).
1861: First weather map created by Francis Galton (1822–1911).
1880: The Seismograph, for detecting and measuring the strength of earthquakes, invented by John Milne (1850–1913).
1911 onwards: Geochronology pioneered by Arthur Holmes (1890–1965).
1938–1964: The Callendar effect, a theory linking rising carbon dioxide concentrations in the atmosphere to global temperature (Global warming), proposed by Guy Stewart Callendar (1898–1964).
Philosophy of science
c. 1240s: An early framework for the scientific method, based in Aristotelian commentaries, proposed by English statesman, scientist and Christian theologian Robert Grosseteste (c. 1175–1253).
1267: Early form of the scientific method articulated in Opus Majus by Roger Bacon (c. 1214 – c. 1292).
1620: Baconian method, a forerunner of the scientific method, proposed in the Novum Organum by Sir Francis Bacon (1561–1626).
1892: Scope and method of science proposed in The Grammar of Science by Karl Pearson (1857–1936); the book was a pivotal influence on the young Albert Einstein and contained several ideas that were later to become part of his theories.
Scientific instruments
1630–1632: The slide rule invented by William Oughtred (1574–1660), developing on work by Edmund Gunter (1581–1626) and Edmund Wingate (1596–1656).
1630s: The micrometer invented by William Gascoigne (1612–1644).
1665: Compound microscope with 30x magnification developed by Robert Hooke (1635–1703); Hooke published Micrographia in 1665.
1668: The marine barometer invented by Robert Hooke (1635–1703).
1677: The Coggeshall slide rule, a.k.a. the carpenter's slide rule, invented by Henry Coggeshall (1623–1691).
1763: Triple achromatic lens invented by Peter Dollond (1731–1820).
1784: The Atwood machine, for demonstrating the law of uniformly accelerated motion, invented by George Atwood (1745–1807).
c. 1805: First bench micrometer – the "Lord Chancellor", capable of measuring to one ten-thousandth of an inch – invented by Henry Maudslay (1771–1831), a founding father of machine tool technology.
1833: Wheatstone bridge invented by Samuel Hunter Christie (1784–1865); improved and popularised in 1843 by Charles Wheatstone (1802–1875).
1972: The Sinclair Executive, the world's first small electronic pocket calculator, invented by Sir Clive Sinclair (born 1940).
Sport
Before 1299: Bowls or lawn bowls can be traced to 13th-century England. The world's oldest surviving bowling green is Southampton Old Bowling Green, first used in 1299.
Late 15th century: Rounders developed from an older English game known as stoolball.
Early 16th century: Modern boxing developed from bare-knuckle boxing or prizefighting, a resurfacing of Ancient Greek boxing in England. The first recorded boxing match took place on 6 January 1681 in England, arranged by Christopher Monck, 2nd Duke of Albemarle (1653–1688).
1519: World's oldest sporting competition still running, the Kiplingcotes Derby horse-race, established; it has run annually since without a break.
1530s: Origin of real tennis played with rackets, popularised by Henry VIII.
1598: The earliest definite reference to cricket; the sport may arguably be traced further back to 1301 with written evidence of a game known as creag played by Prince Edward, son of Edward I (Longshanks).
Early 17th century: The traditional throwing game Aunt Sally originated.
After 1660: Thoroughbred horseracing developed in 17th- and 18th-century England; royal support from Charles II, a keen racegoer and owner, made horse-racing popular with the public.
1673: Oldest non-equine competition in England, the Scorton Arrow archery tournament, established in Scorton, Yorkshire.
1715: Oldest rowing race in the world, Doggett's Coat and Badge established; the race on the River Thames has been held every year since 1715.
1744: Earliest description of baseball in A Little Pretty Pocket-Book by John Newbery (1713–1767); the first recorded game of "Bass-Ball" took place in 1749 in Surrey. William Bray (1736–1832) recorded a game of baseball on Easter Monday, 1755 in Guildford, Surrey; the game is considered to have been taken across the Atlantic by English emigrants.
Early 19th century: Modern field hockey developed in English public schools; the first club was established in 1849 in Blackheath, London.
1820s: Ice hockey, a variant of field hockey, invented by British soldiers based in Canada. British soldiers and emigrants to Canada and the United States played their stick-and-ball games on the winter ice and snow; in 1825, John Franklin (1786–1847) wrote during one of his Arctic expeditions: "The game of hockey played on the ice was the morning sport" on Great Bear Lake.
1823 or 1824: Invention of Rugby football credited to William Webb Ellis (1806–1872).
1850: The format of the modern Olympic Games inspired by William Penny Brookes (1809–1895); see also the Cotswold Olimpick Games.
c. 1850: A bowling machine for cricket named the Catapulta (a predecessor of the pitching machine) invented by Nicholas "Felix" Wanostrocht (1804–1876).
1857: Sheffield F.C. formed by former public school pupils, making it the world's first and oldest Association football club, as acknowledged by The Football Association and FIFA.
1867: The coconut shy, a traditional fairground game, first recorded in Kingston, Surrey.
1859–1865: Lawn tennis invented by Harry Gem (1819–1881) and Augurio Perera, a Spanish-born merchant and sportsman based in England.
1874–1875: Snooker invented by the British Army in India.
1874: Formal codification of the rules of modern Polo established by the Hurlingham Polo Association; polo had been introduced to England in 1834 by the 10th Hussars at Aldershot, Hants, and in 1862 the first polo club, Calcutta Polo Club, was established by two British soldiers, Captain Robert Stewart and (later Major General) Joe Sherer.
1880 onwards: Modern rock climbing developed by Walter Parry Haskett Smith (1859–1946), so-called "father of rock climbing".
1880s: Table tennis or ping-pong originated in Victorian England as an indoor version of tennis; it was developed and played by the upper class as an after-dinner parlour game.
1888: Tiddlywinks patent application by London bank clerk Joseph Assheton Fincher (1863–1900); tiddlywinks originated as an adult parlour game in Victorian England.
1893–1897: Netball developed from early versions of women's basketball at Madame Österberg's College in England.
1895: Rugby league created with the establishment of the Northern Rugby Football Union (NRFU) as a breakaway faction of England's Rugby Football Union (RFU).
1896: The dartboard-layout used in the game and professional competitive sport of Darts was devised by Lancashire carpenter Brian Gamlin (c. 1852–1903); Gamlin died before he could patent his idea.
1899: Mixed martial art (MMA) Bartitsu invented by Edward William Barton-Wright (1860–1951).
1948: The first Paralympic games competition, originally the Stoke Mandeville Games, created in England by German-born (from 1945 naturalised) British neurologist Ludwig Guttmann (1899–1980).
1954: Sir Roger Bannister (1929–2018) ran the first sub-four-minute mile on 6 May 1954.
1979: First modern bungee jumps made from the Clifton Suspension Bridge in Bristol by members of the Oxford University Dangerous Sports Club.
Transport
Aviation
1799: Concept of the modern aeroplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control set forth by Sir George Cayley (1773–1857); Cayley is one of the most important people in the history of Aeronautics and flight: he is sometimes called the "father of aviation".
1804: First glider to carry a human being aloft designed by Sir George Cayley (1773–1857). Cayley discovered and identified the four aerodynamic forces of flight: weight, lift, drag, and thrust; modern aeroplane design is based on those discoveries, along with the cambered wing, which Cayley also discovered.
1837: Pioneering contribution to parachute design made by Robert Cocking (1776–1837); aged 61, Cocking was the first person to be killed in a parachuting accident.
1844: Hale rockets, an improved version of the Congreve rocket design that introduced thrust vectoring, invented by William Hale (1797–1870).
1848: World's first powered flight (of 30 feet) achieved in Chard, Somerset with the Aerial Steam Carriage by John Stringfellow (1799–1883), 55 years before the Wright brothers; Stringfellow and William Samuel Henson (1812–1888) patented their invention in 1842.
Late-19th century: The term "air port" first used – to describe the port city Southampton, where some early flying boats landed.
1929: Turbojet engine single-handedly invented by Sir Frank Whittle (1907–1996).
1949: First commercial jet airliner, the de Havilland Comet, designed, developed and manufactured by de Havilland.
1954: First aircraft capable of supercruise, the English Electric Lightning, designed, developed and manufactured by English Electric.
1959: Aerospace engineer John Hodge (1929–2021) migrated to become part of NASA's Space Task Group, which was responsible for America's manned space programme, Project Mercury.
1960: VTOL (Vertical Take-Off and Landing) aircraft (most famously the Harrier) invented by Gordon Lewis (1924–2010), Ralph Hooper (born 1926), Stanley Hooker (1907–1984) and Sydney Camm (1893–1966); the project developed on ideas by Frenchman Michel Wibault.
1965: Concorde, the world's first supersonic commercial aircraft, developed jointly by Britain and France and manufactured by BAC and Sud Aviation, with Sir James Hamilton (1923–2012) a key figure in its design; operated by British Airways and Air France, Concorde could fly from London Heathrow to New York JFK in three hours and fifteen minutes.
Railways
1825: Opening of the Stockton and Darlington Railway, the world's first operational steam passenger railway; it was taken over by the North Eastern Railway in 1863.
1830: Opening of the Liverpool and Manchester Railway, the first inter-city steam-powered railway; the railway was absorbed by the Grand Junction Railway in 1845.
1838: Opening of the first stretch of the Great Western Railway, from London Paddington station to (the original) Maidenhead station, engineered by Isambard Kingdom Brunel (1806–1859).
Locomotives
1802: First full-scale railway steam locomotive built by Richard Trevithick (1771–1833). This built on the endeavours of two other Englishmen: the Devon-born engineer Thomas Savery (c. 1650–1715), and Thomas Newcomen (c. 1664–1729), who built the first practical steam engine in 1712. James Watt did not invent the steam engine; rather, Watt, prompted by his English backer and manufacturer Matthew Boulton, effected improvements sufficient to make the invention commercially viable.
1812: First commercially viable steam locomotive, the twin cylinder Salamanca, designed and built by Matthew Murray (1765–1826) of Holbeck.
1813: First practical steam locomotive to rely simply on the adhesion of iron wheels on iron rails, Puffing Billy, built by William Hedley (1779–1843).
1814: First successful flanged-wheel adhesion locomotive, the Blücher, built by George Stephenson (1781–1848).
1824: First steam locomotive to carry passengers on a public rail-line, the Locomotion No. 1, built by Robert Stephenson (1803–1859), son of George Stephenson.
1829: Stephenson's Rocket built by George Stephenson (1781–1848) and his son Robert Stephenson (1803–1859); the Rocket was not the first steam locomotive, but it was the first to bring together several innovations to produce the most advanced locomotive of its day.
1829: The Sans Pareil, a less advanced competitor of Stephenson's Rocket, built by Timothy Hackworth (1786–1850).
1829: The Stourbridge Lion, first steam locomotive to be operated in the United States, built by Foster, Rastrick and Company of Stourbridge, Worcestershire, now West Midlands; the manufacturing company was headed by James Foster (1786–1853) and John Urpeth Rastrick (1780–1856).
1835: Der Adler, the first steam locomotive in Germany, built by George and Robert Stephenson in Newcastle.
1923: The Flying Scotsman built to a design by Sir Nigel Gresley (1876–1941); in 1934 the Flying Scotsman became the first steam locomotive officially authenticated as reaching 100 mph in passenger service.
Other railway developments
1842: The Edmondson railway ticket invented by Thomas Edmondson (1792–1851); British Rail used Edmondson tickets until February 1990.
1852 onwards: Numerous inventions for railways by John Ramsbottom (1814–1897), including: the split piston ring (1852), the Ramsbottom safety valve (1855), the Displacement lubricator (1860), and the water trough (1860).
1863: Opening of the world's oldest underground railway, the London Underground, a.k.a. the Tube; the Tube is the oldest rapid transit system, and it was the first underground railway to operate electric trains.
Late 1940s: Maglev, the use of magnetic levitation to move vehicles without touching the ground, invented by Eric Laithwaite (1921–1997).
1981: The Advanced Passenger Train (APT), an experimental high-speed train that pioneered tilting, introduced by British Rail.
Roads
1804: The seat belt invented by Sir George Cayley (1773–1857).
1808: Tension-spoke wire wheels invented by Sir George Cayley (1773–1857).
1829: First practical steam fire engine invented by John Braithwaite the younger (1797–1880).
1834: The Hansom cab, a type of horse-drawn carriage, invented by Joseph Hansom (1803–1882).
1868: First traffic lights (manually operated and gas-lit) installed outside London's Houses of Parliament; invented by John Peake Knight (1828–1886).
c. 1870: "Ariel", a penny-farthing bicycle, developed by James Starley (1831–1881).
1876: The legal collection of 70,000 seeds from the rubber-bearing tree Hevea brasiliensis by Sir Henry Alexander Wickham, which led to the discovery of the ideal growing climates and locations for rubber trees. Most commercial rubber plants are descended from the seeds he took to Kew Gardens.
1884: Thomas Parker claimed to have invented the first electric car.
1885: First commercially successful safety bicycle, "the Rover", developed by John Kemp Starley (1855–1901).
1901: Tarmac patented by Edgar Purnell Hooley (1860–1942).
c. 1902: The invention of the Bowden cable popularly attributed to Sir Frank Bowden (1848–1921), founder and owner of the Raleigh Bicycle Company.
1910: Opening of the oldest existing driving school and first formal driving tuition provider, the British School of Motoring, in Peckham, London.
1922: Horstmann suspension, a coil spring suspension system commonly used on western tanks, invented by Sidney Horstmann (1881–1962).
1926: First automated traffic lights in England deployed in Piccadilly Circus in 1926; outside of London, Wolverhampton was in 1927 the first British town to introduce automated traffic lights.
1934: The Cat's eye, a safety device used in road marking, invented by Percy Shaw (1890–1976).
1934: The Belisha beacon introduced by Leslie Hore-Belisha (1893–1957).
1962: First modern Formula One car, the Lotus 25, designed by Colin Chapman (1928–1982) for Team Lotus; the design incorporated the first fully stressed monocoque chassis to appear in automobile racing.
1985: The Sinclair C5, a one-person battery electric vehicle, invented by Sir Clive Sinclair (born 1940).
1997: World Land Speed Record, 1,228 km/h (763 mph), achieved by ThrustSSC, a jet-propelled car designed and built in England. Project director: Richard Noble (born 1946); designed by Ron Ayers (born 1932), Glynne Bowsher and Jeremy Bliss; piloted by Andy Green (born 1962).
Sea
1578: The first submersible (a small, submarine-like vehicle) of whose construction there exists reliable information designed by Englishman William Bourne (c. 1535–1582) in his book Inventions or Devises; Dutchman Cornelius Drebbel put Bourne's concept into action in 1620.
1691: A diving bell capable of allowing its occupier to remain submerged for extended periods of time, and fitted with a window for the purpose of underwater exploration, designed by Edmund Halley (1656–1742), best known for computing the orbit of Halley's Comet.
c. 1730: The octant invented by English mathematician John Hadley (1682–1744); American optician Thomas Godfrey developed the instrument independently at approximately the same time.
1743: The "Whirling Speculum", a device used to locate the horizon in foggy or misty conditions, invented by John Serson (died 1744); Serson's Speculum can be seen as a precursor to the gyroscope.
1757: First sextant made by John Bird (1709–1776), adapting the principle of Hadley's octant.
1785: The lifeboat invented and patented by Lionel Lukin (1742–1834); William Wouldhave (1751–1821) made a rival claim, but he did not succeed with the practical application of his invention until 1789.
1799: The Transit, a type of sailing vessel with a remarkable turn of speed, patented by Richard Hall Gower (1768–1833).
1835: The screw propeller invented and patented by Francis Pettit Smith (1808–1874).
1843: Launch of the SS Great Britain – the first steam-powered, screw propeller-driven passenger liner with an iron hull; designed by Isambard Kingdom Brunel (1806–1859), it was at the time the largest ship afloat.
1876: Plimsoll Line devised by Samuel Plimsoll (1824–1898).
1878: First commercially successful closed-circuit scuba designed and built by Henry Fleuss (1851–1932), a pioneer in the field of diving equipment.
1878–1879: Two early Victorian submarines, Resurgam I and Resurgam II, designed and built by George Garrett (1852–1902).
1894: The first steam turbine powered steamship, Turbinia (easily the fastest ship in the world at the time), designed by Anglo-Irish engineer Sir Charles Algernon Parsons (1854–1931), and built in Newcastle upon Tyne.
1899–1901: Developments on the hydrofoil by shipbuilder John Isaac Thornycroft (1843–1928), from the concept of Italian Enrico Forlanini.
1912: World's first patent for an underwater echo ranging device (sonar) filed a month after the sinking of the Titanic by Lewis Fry Richardson (1881–1953).
1915: Research into solving the practical problems of submarine-detection by sonar led by Ernest Rutherford (1871–1937).
1955: The hovercraft invented by Sir Christopher Cockerell (1910–1999).
Miscellaneous
1286: First recorded use of the Halifax Gibbet, an early guillotine.
Early 17th century: The closely cut "English" lawn created in the Jacobean epoch of gardening, as the garden and the lawn became places created first as walkways and social areas. The English lawn became a symbol of status of the aristocracy and gentry; it showed that the owner could afford to keep land that was not being used for a building or for food production.
1668: Earliest concept of a metric system proposed by John Wilkins (1614–1672) in An Essay towards a Real Character and a Philosophical Language.
1706: World's first life insurance company, the Amicable Society, founded by William Talbot (1658–1730) and Sir Thomas Allen, 2nd Baronet (c. 1648–1730).
1719: Oldest music-based festival, the Three Choirs Festival, established.
1725: The modern kilt, associated since the 19th century with Scottish culture, arguably invented by English Quaker Thomas Rawlinson (dates not known).
c. 1760: The jigsaw puzzle invented and commercialised by cartographer John Spilsbury (1739–1769).
1767: The carbonated soft drink invented by Joseph Priestley (1733–1804).
1768–1770: The modern circus invented by Philip Astley (1742–1814) in Astley's Amphitheatre on Westminster Bridge Road in Lambeth.
c. 1770–1780: The lorgnette (a pair of spectacles with a handle, used to hold them in place) invented by George Adams the elder (c. 1709–1773) and subsequently illustrated in a work by his son George Adams the younger, An Essay on Vision, briefly explaining the fabric of the eye (1789).
1772: Oldest arts festival established in Norwich.
1787: First glee club founded in Harrow School.
1797: The top hat arguably invented by English haberdasher John Hetherington (dates not known).
1798: Consequences of population growth identified by Thomas Robert Malthus (1766–1834) in An Essay on the Principle of Population.
1798: Oldest police force in continuous operation, the Marine Police Force, formed by English seafarer John Harriott (1745–1817) and Scot Patrick Colquhoun; it merged with the nascent Metropolitan Police Service in 1839.
18th–19th centuries: The history of comics developed with innovations by William Hogarth (1697–1764), James Gillray (1756/57–1815), George Cruikshank (1792–1878) and others; The Glasgow Looking Glass (1826) is arguably the first comic strip, with William Heath as its principal strip illustrator.
1811: The graphic telescope, a drawing aid with the power of a telescope, invented by water-colour painter Cornelius Varley (1781–1873).
1821: World's first modern nature reserve established by naturalist and explorer Charles Waterton (1782–1865); Waterton was described by David Attenborough as "one of the first people anywhere to recognise not only that the natural world was of great importance but that it needed protection as humanity made more and more demands on it".
1824: Rubber balloon invented by Michael Faraday (1791–1867) during experiments with gases.
1824: First animal welfare society, the RSPCA, founded by a group of reformers including William Wilberforce.
1826: First effective friction match invented by John Walker (1781–1859).
1829: Metropolitan Police Force founded by Home Secretary Sir Robert Peel; by 1857 all cities in the UK were obliged to form their own police forces.
1837: Egg-free custard invented by Alfred Bird.
1840: Stamp collecting initiated by zoologist John Edward Gray (1800–1875); on 1 May 1840, the day the Penny Black first went on sale, Gray bought several with the intent to save them.
1844: The Rochdale Society of Equitable Pioneers founded in Lancashire. The Rochdale Principles are the foundation for the co-operative principles on which co-ops around the world operate to this day.
1844: YMCA (Young Men's Christian Association) founded in London by Sir George Williams (1821–1905), with the aim of putting Christian values into practice.
1846: The Christmas cracker invented by London confectioner Thomas J. Smith by wrapping a bon-bon in a twist of coloured paper, adding a love note, a paper hat and a banger mechanism. This new idea took off and the bon-bon was eventually replaced by a small toy or novelty.
1849: Bowler hat designed by London hat-makers Thomas and William Bowler.
1851: Prime meridian established at Greenwich by Sir George Biddell Airy (1801–1892), Astronomer Royal from 1835 to 1881; Airy's line, the fourth Greenwich Meridian, became the definitive, internationally recognised line in 1884.
1851: Revolutionary modular, prefabricated design, and use of glass utilised in the building of the Crystal Palace of the Great Exhibition by Joseph Paxton (1803–1865); after the exhibition, the Crystal Palace was moved to Sydenham where it was destroyed in a fire in 1936.
1851: Steel-ribbed umbrella developed by Samuel Fox (1815–1887).
1860: Linoleum invented by Frederick Walton (1834–1928).
1865: The Salvation Army, a Christian denominational church and international charitable organisation, founded by Methodist minister William Booth (1829–1912).
1866: The introduction, planting, cultivation and manufacturing of Ceylon tea in the British Crown colony of Ceylon, now Sri Lanka. Sir Arthur Conan Doyle said of the planting efforts "the tea fields of Ceylon were as true a monument to courage as the lions of Waterloo" and called it "one of the greatest commercial victories which pluck and ingenuity ever won."
1868: Erection of the first mounted dinosaur skeleton, Hadrosaurus foulkii, and introduction of the universal standard for all future dinosaur displays by English artist Benjamin Waterhouse Hawkins in concert with Dr Joseph Leidy and Edward Drinker Cope; displayed at the Academy of Natural Sciences.
1870s: One precursor (among others) of the modern gas mask constructed by physicist John Tyndall (1820–1893).
1897: Plasticine invented by art teacher William Harbutt (1844–1921).
1901: Model construction system Meccano invented by Frank Hornby (1863–1936).
1902: First large-scale programme of international scholarships, the Rhodes Scholarship, created by Cecil John Rhodes (1853–1902).
1907: The scout movement created by Lord Baden-Powell (1857–1941), on finding that his 1899 military training manual Aids to Scouting was being used by teachers and youth organisations.
1908: The reserve forest which would become the Kaziranga National Park founded by Lord Curzon of Kedleston to protect the dwindling species of rhinoceros.
1913: The crossword puzzle invented by Liverpool-born Arthur Wynne (1871–1945).
1922: Discovery of Tutankhamun's tomb by Archaeologist and Egyptologist Howard Carter, funded by Lord Carnarvon.
1933: Bayko – a plastic building model construction toy, and one of the earliest plastic toys to be marketed – invented by Charles Plimpton (1893–1948).
1946: Toy building bricks invented and patented (under the name "Kiddicraft") by Hilary (Harry) Fisher Page (1904–1957); The Lego Group acquired Page's patent in 1981.
1949: Oldest literary festival, the Cheltenham Literature Festival, established.
1965: Geometric drawing toy Spirograph developed by Denys Fisher (1918–2002).
See also
List of British innovations and discoveries
List of Welsh inventors
Scottish inventions and discoveries
Timeline of Irish inventions and discoveries
Science in Medieval Western Europe
References
English inventions
Inventions and discoveries
Lists of inventions or discoveries |
4674273 | https://en.wikipedia.org/wiki/RLUG | RLUG | RLUG stands for Romanian Linux Users Group.
RLUG is the largest and oldest Linux community in Romania, formed around 1999, with more than 2000 members subscribed to the main mailing list.
RLUG is an unofficial organization that promotes Linux and other Unix-like operating systems in Romania as well as free and open source software in general.
RLUG offers help for Linux/Unix/OSS users through its wiki, dedicated mailing lists and IRC, and hosts a number of software mirrors.
All services are free and offered by volunteers, as is common among all Linux user groups and other Free Software communities.
The community is sustained by its members (through hardware, financial donations, and services) and by two of the main ISPs in Romania, iNES and GTS.
References
External links
Official Wiki
Mailing Lists
Linux user groups |
1597742 | https://en.wikipedia.org/wiki/Chouriki%20Sentai%20Ohranger | Chouriki Sentai Ohranger | Chouriki Sentai Ohranger is a Japanese tokusatsu television series and the 19th installment in the long-running Super Sentai metaseries of superhero programs. It is the second ancient civilization-themed Super Sentai, preceded by Dai Sentai Goggle-V. Its footage was used in the American series Power Rangers Zeo (the closing credits of Zeo referred to it as "O Rangers").
In May 2016, Shout! Factory announced that they would release "Chouriki Sentai Ohranger: The Complete Series" on DVD in North America in November 2016; it was released on November 1, 2016, making it the fourth Super Sentai set to be released in North America. In addition, on September 8, 2017, Shout! streamed the series on their website.
Plot
In the year 1999, the Machine Empire of Baranoia, led by Emperor Bacchushund, invades Earth with the intention of wiping out all human life and bringing about machine rule. Chief Counsellor Miura revives super energies that had been born of the lost civilization of Pangaea. Assembling pieces of a stone plate uncovered three years previously, he reveals the secrets of this super energy. Enlisting an elite five-man team of the United Airforce's finest pilots, Miura builds a pyramid to generate Tetrahedron power, allowing the five UAOH officers to transform into the Ohrangers and stop Baranoia's invasion.
Ohrangers
The Ohrangers (オーレンジャー Ōrenjā) are a group of five soldiers from the UA who battle the Machine Empire Baranoia. They were originally normal humans until their body chemistry was slightly altered to utilize Chouriki, a form of energy used by an ancient civilization dating back to the time when all the continents formed Pangaea. Their special team attack is the Chouriki Dynamite Attack (超力ダイナマイトアタック Chōriki Dainamaito Atakku), where the Ohrangers perform multiple mid-air flips and change into an energy ball that destroys Machine Beasts. Each Ohranger's surname reflects the number of corners in their visor's shape (or, in Momo's and Juri's cases, a circle which has no corners and an "=" which is just two lines), except for Goro, whose surname reflects the shape of his visor (a star).
Goro Hoshino
Goro Hoshino (星野 吾郎 Hoshino Gorō) is a 25-year-old who fights as Oh Red (オーレッド Ō Reddo). An ace pilot and the team UA Captain, Goro is the first to receive his powers. He is a cool-headed and quick-thinking person, though his stubbornness brings him and his teammates much trouble at times. He is an expert in karate, kendo, and judo. The others call him "captain." Goro also appeared in Gaoranger vs. Super Sentai along with his fellow Red Rangers, from Akarenger to Time Red.
Goro appeared years later in Kaizoku Sentai Gokaiger, where he and his partner Momo created a distraction for the Gokaigers while Goro attempted to negotiate with Basco to gain the location of the Zangyack army. When this fell through, Goro was kidnapped by Basco, but was later rescued by the Gokaiger team. He then grants the Greater Power to the Gokaigers, which allows them to create their GokaiGalleon Buster.
Goro is portrayed by: Masaru Shishido (宍戸 勝 Shishido Masaru).
Shouhei Yokkaichi
Shouhei Yokkaichi (四日市 昌平 Yokkaichi Shōhei) is the 27-year-old second-in-command and a boxer who fights as Oh Green (オーグリーン Ō Gurīn). Shouhei is chosen from the same division as Hoshino. He is cheerful, kind and popular with children but is also serious and disciplined in work, being the oldest. He likes pork ramen and makes delicious gyoza.
As Oh Green, he can attack with Explosive: Mirage Knuckles (爆烈ミラージュナックル Bakuretsu Mirāji Nakkuru).
Shouhei is portrayed by Kunio Masaoka (正岡 邦夫 Masaoka Kunio).
Yuji Mita
Yuji Mita (三田 祐司 Mita Yūji) is a 21-year-old who fights as Oh Blue (オーブルー Ō Burū). A swift person, Yuji is an expert in fencing and gymnastics. His recklessness makes him the most childish member along with his way of speaking. Yuji uses jumps and mid air fighting tactics.
As Oh Blue, he can attack with Crashing: Rolling Bomber (激突ローリングボンバー Gekitotsu Rōringu Bonbā).
Yuji is portrayed by Masashi Goda (合田 雅吏 Gōda Masashi).
Juri Nijou
Juri Nijou, UA Lieutenant (二条 樹里 Nijō Juri) is a 22-year-old who fights as Oh Yellow (オーイエロー Ō Ierō). Juri uses martial arts researched in the United States, but she also likes dancing and aerobics, which she uses in battle with great results. She loves fashion.
As Oh Yellow, she can attack with Lightspeed: Splash Illusion (光速スプラッシュイリュージョン Kōsoku Supurasshu Iryūjon).
Juri is portrayed by Ayumi Hodaka (穂高 あゆみ Hodaka Ayumi) [Played as Ayumi Aso (麻生 あゆみ Asō Ayumi)].
Momo Maruo
Momo Maruo, UA Lieutenant (丸尾 桃 Maruo Momo) is 20 years old, the youngest member of the team; she uses Chinese boxing and aikido and fights as Oh Pink (オーピンク Ō Pinku). While separated from the others during the Bara Magma incident, having lost her Power Brace, Momo is befriended by a German Shepherd named Johnny, referred to by the locals as a divine savior, who brings her to safety. With the help of Shouta, Momo finds Johnny with her Power Brace. She takes Johnny's apparent death hard and avenges him, only to find that he is still alive.
Momo appeared in Kaizoku Sentai Gokaiger and led the Gokaigers to believe she had in her possession the Greater Power of the Ohrangers, and would give it to the Gokaigers freely if they did errands for her. This was later revealed to be a ruse to keep the Gokaigers distracted.
As Oh Pink, she can attack with Flashing: Miracle Chi Kung Shot (閃光ミラクル気功弾 Senkō Mirakuru Kikōdan).
Momo is portrayed by Tamao Satō (さとう 珠緒 Satō Tamao) [Played as Tamao (珠緒)].
Allies
Naoyuki Miura
Chief of Staff Naoyuki Miura (三浦 尚之参謀長 Miura Naoyuki-sanbōchō) is the Ohrangers' commander, a dedicated leader who refuses to give up no matter what. An anthropologist and scientist as well, he learned of the ancient Pangaean civilization in 1996 and reverse-engineered Chouriki to create the Ohrangers' arsenal and mecha when they are needed. He once defeated a Baranoia Soldier with his bare hands after a UA soldier couldn't do it with a gun.
Miura is portrayed by tokusatsu actor Hiroshi Miyauchi (宮内 洋 Miyauchi Hiroshi).
Dorin
Dorin (ドリン Dorin, 26–31, 36, 42, 46–48) is one of the god-like people of Pangaea, found sleeping inside King Pyramider. It is discovered that she is an important part of the Chouriki on Earth, and Riki is assigned to her care. She was killed by Multiwa but revived in the finale. She has a green pet lizard named Paku (パク Paku).
Dorin is portrayed by Lisa Wada (和田 理沙 Wada Risa).
Gunmajin
Gunmajin (ガンマジン Ganmajin, 37, 38, 40, 41, 44, 48 & Ohranger vs. Kakuranger) is an ancient warrior known for his honor and courage. According to Riki, Gunmajin appeared once 600 million years ago. Imprisoned within the form of a tiny tiki, the only way to unlock his power is by placing a key into his forehead and reciting the magic words "Gunma Gunma Dondoko Gunma" (ガンマガンマ ドンドコガンマ Ganma Ganma Dondoko Ganma). For some reason, the key always ended up in the hands of a child and everyone knew the magic words after hearing them. When awakened, Gunmajin would grant a single wish to his discoverer as long as it didn't mean harm to anyone. There were times in which he simply didn't like the wish and refused to grant it or punished his awakener for lying to him. Gunmajin possessed the Mazin Saber (マジンサーベル Majin Sāberu) through which he focused his power into the Majin One Sword Style (マジン一刀流 Majin Ittō Ryū) of Majin One Sword Fencing (Fire, Lightning, Wind, and Light). His back can act as a shield to defend himself and others (even King Pyramider's beam). In the series finale, he took Acha, Kocha, and Buldont, Jr. to his care. In Ohranger vs. Kakuranger, he is revealed to be terrified of Youkai (monsters fought by the Kakurangers).
Gunmajin is voiced by Akira Kamiya (神谷 明 Kamiya Akira).
Shunpei Kirino
Shunpei Kirino (桐野 俊平 Kirino Shunpei) was a U.A. Lieutenant who died trying to control the Red Puncher.
Kirino was played by Kei Shindachiya (信達谷圭 Shindachiya Kei).
Kotaro Henna
Kotaro Henna (辺名 小太郎 Henna Kotaro, 25, 29–30) is a crazed robot expert who unintentionally causes trouble while wanting to see what makes Baranoia tick.
Arsenal
Power Brace (パワーブレス Pawā Buresu): The Ohrangers' changing device. One piece is worn on each wrist. The right-armed piece has a Storage Crystal (ストレージクリスタル Sutorēji Kurisutaru), the source of the Ohrangers' energy, attached to it. The Storage Crystal is also placed inside of the cockpit of Ohranger Robo in order to pilot it. The left piece also can work as a communication device to contact UA's home base, and to contact the other Rangers. By using the call "Super-Powered Transformation!" (超力変身! Chōriki Henshin!) and connecting their braces, the user transforms into an Ohranger.
Thunderwings (サンダーウィング Sandā Wingu): Air force fighter jets piloted by UAOH members.
Jetter Machines: Five motorcycles that serve as the Ohrangers' personal transportation. Can be stored into the Thunderwings until they are needed.
Red Jetter (レッドジェッター Reddo Jettā): Oh Red's motorcycle.
Green Jetter (グリーンジェッター Gurīn Jettā): Oh Green's motorcycle.
Blue Jetter (ブルージェッター Burū Jettā): Oh Blue's motorcycle.
Yellow Jetter (イエロージェッター Ierō Jettā): Oh Yellow's motorcycle.
Pink Jetter (ピンクジェッター Pinku Jettā): Oh Pink's motorcycle.
King Smasher (キングスマッシャー Kingu Sumasshā): A combination of the Battle Stick & King Blaster.
Battle Stick (バトルスティック Batoru Sutikku): Sword-like weapons wielded by the Ohrangers. Can be used in the "Battle Stick Hurricane" team attack.
King Blaster (キングブラスター Kingu Burasutā): Standard laser pistols wielded by the Ohrangers. They can be combined with the Ohrangers' other various weapons to form even more powerful tools.
Star Riser (スターライザー Sutā Raizā): A powerful sword wielded by Oh Red; special attack is Secret Sword: Super-Powered Riser (秘剣・超力ライザー Hiken - Chōriki Raizā).
Square Crusher (スクエアクラッシャー Sukuea Kurasshā): A pair of powerful hatchets wielded by Oh Green; special attack is Lightning: Super-Powered Crushers (電光・超力クラッシャー Denkō - Chōriki Kurasshā).
Delta Tonfas (デルタトンファ Deruta Tonfa): A pair of mighty bladed-tonfas wielded by Oh Blue; special attack is Lightning: Super-Powered Tonfas (稲妻・超力トンファ Inazuma - Chōriki Tonfa).
Twin Batons (ツインバトン Tsuin Baton): A pair of strong nunchaku wielded by Oh Yellow; special attack is Explosion: Super-Powered Baton (炸裂・超力バトン Sakuretsu - Chōriki Baton).
Circle Defenser (サークルディフェンサー Sākuru Difensā): A power defensive shield and weapon wielded by Oh Pink; special attack is Hurricane: Super-Powered Defenser (疾風・超力ディフェンサー Shippū - Chōriki Difensā).
Big Bang Buster (ビッグバンバスター Biggu Ban Basutā): A special combination of the main five Ohrangers' weapons and a King Smasher. The weapon is used to destroy normal-sized Machine Beasts.
Giant Roller (ジャイアントローラー Jaianto Rōrā): A giant wheel stored inside Sky Phoenix, used by Oh Red to destroy normal-sized Machine Beasts.
Ole Bazooka (オーレバズーカ Ōre Bazūka): A cannon loaded with Hyper Storage Crystals (ハイパーストレージクリスタル Haipā Sutorēji Kurisutaru) that is used to finish off Machine Beasts.
King Brace (キングブレス Kingu Buresu): Riki/King Ranger's transformation device, which served as the basis of the Ohrangers' Power Brace. Gold-colored (as opposed to the Ohrangers' silver-colored braces) with a Storage Crystal in a "king" (王 Ō) shape. The transformation call is the same as the other Ohrangers'.
King Stick (キングスティック Kingu Sutikku): A mighty staff given to Riki by Dorin upon their arrival. Special attacks are King Victory Flash and King Tornado.
Mecha
The mecha constructed by UA are used by the Ohrangers to fight Baranoia.
Chouriki Mobiles
The Chouriki Mobiles (超力モビル Chōriki Mobiru) consists of the following:
Sky Phoenix (スカイフェニックス Sukai Fenikkusu): A red phoenix Chouriki Mobile piloted by Oh Red which uses the Phoenix Beam and carries the Giant Roller which he delivers to battles so Oh Red can use him to destroy normal-sized Machine Beasts. It forms Ohranger Robo's head, back, and Wing Head helmet. Sky Phoenix would later help out in Gaoranger vs. Super Sentai.
Gran Taurus (グランタウラス Guran Taurasu): A green bull Chouriki Mobile piloted by Oh Green which carries Dogu Lander into battle. It is armed with the Taurus Beam and can ram enemies with his Taurus Attack. It forms Ohranger Robo's hips and the Horn Head.
Dash Leon (ダッシュレオン Dasshu Reon): A blue sphinx Chouriki Mobile piloted by Oh Blue which carries Moa Loader into battle. It is armed with the Leon Shot in his forehead and can bite opponents with its Leon Crush attack. It forms Ohranger Robo's torso, arms, and the Graviton Head.
Dogu Lander (ドグランダー Dogu Randā): A yellow dogū Chouriki Mobile piloted by Oh Yellow which is carried into battle by Gran Taurus. It is armed with the dual Dogu Vulcans on its head and side-mounted cannons. It forms Ohranger Robo's left leg and the Vulcan Head.
Moa Loader (モアローダー Moa Rōdā): A pink moai Chouriki Mobile piloted by Oh Pink which is carried into battle by Dash Leon. It is armed with the Moa Cannon on his head and side-mounted missile launchers. It forms Ohranger Robo's right leg and the Cannon Head.
Blocker Robos
The Blocker Robos (ブロッカーロボ Burokkā Robo) are giant robots that are shaped as geometric shapes, respective to their Ohranger pilot's symbol, arriving at the command "Blocker Robos, launch!". Each individual Blocker Robo can wield giant-sized versions of the Battle Stick and King Blaster, and the Ohranger's main weapons.
Red Blocker (レッドブロッカー Reddo Burokkā): The red star-shaped blocker piloted by Oh Red. Armed with the Star Head Attack (スターヘッドアタック Sutā Heddo Atakku) and Red Star Fire.
Green Blocker (グリーンブロッカー Gurīn Burokkā): The green square-shaped blocker piloted by Oh Green. Armed with the Green Body Attack (グリーンボディアタック Gurīn Bodi Atakku) and Green Enclose Net. Capable of operating underwater.
Blue Blocker (ブルーブロッカー Burū Burokkā): The blue triangle-shaped blocker piloted by Oh Blue. Armed with the Blue Kick (ブルーキック Burū Kikku) and Blue Freezing Storm.
Yellow Blocker (イエローブロッカー Ierō Burokkā): The yellow equal sign-shaped blocker piloted by Oh Yellow. Armed with the Yellow Spinning Kick (イエロースプニングキック Ierō Supuningu Kikku) and Yellow Lightning Flash.
Pink Blocker (ピンクブロッカー Pinku Burokkā): The circle-shaped blocker piloted by Oh Pink. Armed with the Pink Skyline Chop (ピンクスカイラインチョップ Pinku Sukairain Choppu) and Pink Impact Wave.
Ohranger Robo
Super-Powered Combination Ohranger Robo (超力合体オーレンジャーロボ Chōriki Gattai Ōrenjā Robo) is the primary mecha formed by the Chouriki Mobiles. The Ohranger Robo is a unique robot because of its interchangeable "Helmets", one for each of the five separate vehicles, with one core member positioned in the center cockpit, usually switching places with Oh Red depending on whose vehicle is being used as the helmet:
The default Wing Head (ウィングヘッド Wingu Heddo) allows Ohranger Robo to use the Super Crown Sword (スーパークラウンソード Sūpā Kuraun Sōdo) for its Crown Final Crash (クラウンファイナルクラッシュ Kuraun Fainaru Kurasshu), Chouriki Crown Sword Shoot (超力クラウンソードシュート Chōriki Kuraun Sōdo Shūto), and Chouriki Crown Spark Shield (超力クラウンスパークシールド Chōriki Kuraun Supāku Shīrudo) attacks.
The Horn Head (ホーンヘッド Hōn Heddo) allows Ohranger Robo to execute the Taurus Dive headbutt and Super-Powered Taurus Thunder (超力タウラスサンダー Chōriki Taurasu Sandā) attacks.
The Graviton Head (グラビトンヘッド Gurabiton Heddo) allows Ohranger Robo to execute the Leon Punch (レオンパンチ Reon Panchi) and Chouriki Leon Beam (超力レオンビーム Chōriki Reon Bīmu) attacks.
The Vulcan Head (バルカンヘッド Barukan Heddo) allows Ohranger Robo to execute the Chouriki Jump Crash (超力ジャンプクラッシュ Chōriki Janpu Kurasshu), Dogu Sky Kick (ドグスカイキック Dogu Sukai Kikku) and Chouriki Dogu Vulcan (超力ドグバルカン Chouriki Dogu Barukan) attacks.
The Cannon Head (キャノンヘッド Kyanon Heddo) allows Ohranger Robo to execute the Moa Tornado (モアトルネード Moa Torunēdo) and Chouriki Moa Cannon (超力モアキャノン Chōriki Moa Kyanon) attacks.
Red Puncher
Red Puncher (レッドパンチャー Reddo Panchā) was the first mecha built by UA, constructed in 1997 prior to the Baranoia invasion. When Baranoia begins its invasion, Red Puncher is piloted by a UAOH Lieutenant named Shunpei Kirino; however, Miura still couldn't control the Chouriki energy needed to operate it, and Shunpei died and Red Puncher was buried under boulders as a result of its berserker rage. When Bara Builder damaged Ohranger Robo, Goro found where Red Puncher was buried and managed to gain control. Arrives by the command "Red Puncher, go!" and can execute the Puncher Gatling (パンチャーガトリング Panchā Gatoringu) and Magna Puncher (マグナパンチャー Maguna Panchā) attacks.
In the finale, Red Puncher is captured by Baranoia, but it is freed by Gunmajin.
Buster Ohranger Robo
Artillery Combination Buster Ohranger Robo (超砲撃合体バスターオーレンジャーロボ Chōhōgeki Gattai Basutā Ōrenjā Robo) is the combination of Ohranger Robo and Red Puncher. Red Puncher is able to assume Combination Mode to form Buster Ohranger Robo, forming the posterior and "Buster Head" (バスターヘッド Basutā Heddo).
It can use the two large shoulder cannons to destroy Machine Beasts with its Big Cannon Burst (ビッグキャノンバースト Biggu Kyanon Bāsuto) finisher.
King Pyramider
King Pyramider (キングピラミッダー Kingu Piramiddā) is King Ranger's mecha; Riki and Dorin were placed in suspended animation within it until King Pyramider returned to Earth several millennia later. It is a massive pyramid that can cloak itself until summoned by King Ranger. Its primary attack is a powerful lightning strike summoned from the sky, called Super Burn Wave (スーパーバーンウェーブ Sūpā Bān Wēbu). King Pyramider's true power can be seen when it converts to either Carrier Formation (キャリアフォーメーション Kyaria Fōmēshon) or a gigantic robot, Battle Formation (バトルフォーメーション Batoru Fōmēshon), and it is able to carry either the Super-Power Mobiles or Oh Blocker, as well as the Red Puncher. The finisher for the Battle Formation is the Super Legend Beam (スーパーレジェンドビーム Sūpā Rejendo Bīmu) barrage. It transforms into either form whenever the commands "King Pyramider, Carrier Formation!" or "King Pyramider, Battle Formation!" are given. The bases of the left and right sides become the arms, the black section of the front becomes the feet as the legs are revealed, and the back side shows a black pyramid that becomes the head (the lower half of the front side shows the face).
Oh Blocker
Super-Heavy Fusion Oh Blocker (超重合体オーブロッカー Chōjū Gattai Ō Burokkā) is made up of the Blocker Robos. Red Blocker forms Oh Blocker's body. Green Blocker forms Oh Blocker's lower legs. Blue Blocker forms Oh Blocker's waist and upper legs. Yellow Blocker forms Oh Blocker's head, shoulders, and arms. Pink Blocker forms Oh Blocker's feet. Sometimes it can be launched fully combined at the command of "Oh Blocker, launch!".
Oh Blocker wields the Twin Blocken Swords (ツインブロッケンソード Tsuin Burokken Sōdo), with which it performs the Twin Blocken Thunder (ツインブロッケンサンダー Tsuin Burokken Sandā) and the Twin Blocken Crash (ツインブロッケンクラッシュ Tsuin Burokken Kurasshu) finisher. It can also fire a beam from its forehead.
In the finale, Oh Blocker is captured by Baranoia, but it is freed by Gunmajin and piloted by King Ranger.
Tackle Boy
Tackle Boy (タックルボーイ Takkuru Bōi) is a massive American Football Player robot, yet a dwarf compared to the other robots. It converts to a massive wheel (the back of the wheel is the feet, which reveals the head, and the sides are the arms). Comes when given the command, "Tackle Boy, launch!".
Artificially intelligent, Tackle Boy can be thrown by Oh Blocker for the Dynamite Tackle (ダイナマイトタックル Dainamaito Takkuru) finisher.
Machine Empire Baranoia
The Machine Empire Baranoia is a cruel race of machines out to conquer Earth. It is led by Bacchushund and possesses a vast army. It had already conquered an entire chain of galaxies before reaching Earth.
Bacchus Wrath
Emperor Bacchus Wrath (皇帝バッカスフンド Kōtei Bakkasuhundo, 1-34, 39–40) is the ruler of Baranoia, built 600 million years ago by an ancient race. He turned to evil and was banished into the depths of space by King Ranger. Seeing himself as a god, Bacchus Wrath believes he has all right to conquer the world and make humans into his slaves. Very violent, tending to malfunction when he goes berserk, Bacchus Wrath does not tolerate failure nor sentimentality in his minions' programming.
In episode 33, it is revealed that Bacchus Wrath had secretly been rebuilding some of his Machine Beasts and had managed to harness the infinite energy of the Earth's magma: when his rebuilt Machine Beasts passed through the magma shower, they became Super Machine Beasts. Thanks to the Ohrangers' "Trojan Horse" plan with the Blocker Robos, this facility was destroyed, preventing any more Super Machine Beasts from being created. In episode 34, he grows in size and power thanks to a space-metal dark sword which, according to him, was the only one of its kind in the universe and which he intended to use on the Ohrangers. He is finally destroyed by Oh Blocker, but Bacchus Wrath's head survives and gives the last of his energy to Buldont before shutting down for good.
Bacchus Wrath is voiced by Tōru Ōhira (大平 透 Ōhira Tōru).
Hysteria
Empress Hysteria (皇妃ヒステリア Kōhi Hisuteria, 1-41) is the wife of Bacchushund, usually remaining in the palace devising plans with her husband, though she does go down to Earth by herself at times. She usually carries a metal fan with her and also a gun. She initially despised humans for their feelings, but began to value life after Bomber the Great assumed command. Her body color changed from gold to silver when she gave all her power to her niece Multiwa, and as a result, she aged into Dowager Empress Hysteria (41, 47–48), now using what is left of her late husband's staff as a cane. She eventually self-destructs in order to protect her grandchild, sacrificing herself after the Ohrangers promised they would not harm the child.
Hysteria is voiced by Minori Matsushima (松島みのり, Matsushima Minori).
Buldont
Prince Buldont (皇子ブルドント Ōji Burudonto, 1-40) is a robot child, the son of Bacchus Wrath and Hysteria. Mischievous and spoiled, he thought of humans as simple toys. He once attempted to direct his own movie, "Century of the Machine Empire", using humans with no notion that they could die from its realism. He can fire lasers from his eyes. After his father's death, Buldont challenges Bomber the Great to a duel for the throne of Baranoia and loses, his body taken away by the exiled Hysteria. However, after finding his father's head and receiving his remaining energy, Buldont is reconfigured into the adult form of Kaiser Buldont (カイザーブルドント Kaizā Burudonto, 40–48). After returning to Baranoia and destroying Bomber the Great, Buldont regains the leadership of Baranoia. He and Multiwa make themselves grow without Acha and Kocha's help, but he is eventually destroyed in the series finale at the hands of King Pyramider Battle Formation.
Buldont is voiced by Tomokazu Seki (関 智一 Seki Tomokazu).
Multiwa
Princess Multiwa (マルチーワ姫 Maruchīwa Hime, 40–48) is Hysteria's niece and Buldont's cousin, skilled with a bow that can become a sword. While Bacchushund revives Buldont, Hysteria decides to do the same and sends all of her 600 million years' worth of energy to Multiwa, who was sleeping on another planet waiting for the day she would become the Machine Empress. Receiving the energy from her aunt along with a message of help, she came to Earth, interrupting the battle between Bomber and the Ohrangers. She aids Buldont in disfiguring Bomber and reprogramming him into their slave before sending him to his death. The two marry after Buldont becomes the new ruler of Baranoia. She and Buldont make themselves grow without Acha and Kocha's help. She eventually dies by her husband's side at the hands of King Pyramider Battle Formation's Super Legend Beam, but not before she bears him a son.
Multiwa is voiced by Miho Yamada (山田 美穂 Yamada Miho).
Buldont Jr.
Buldont Jr. (ブルドントJr. Burudonto Junia, 47–48) is Kaiser Buldont and Princess Multiwa's child. After his birth, his parents are destroyed by King Pyramider and his grandmother Empress Hysteria sacrifices herself after the Ohrangers promise her not to harm the child. The Ohrangers hand Buldont Jr. over to Gunmajin just before he departs to his own planet.
Bomber the Great
Bomber the Great (ボンバー・ザ・グレート Bonbā za Gurēto, 35–41), known as the "Universal Bomb Bastard" (宇宙の爆弾野郎 Uchū no Bakudan Yarō), was just another one of Baranoia's Machine Beasts, yet was able to turn his entire body into a missile. He was exiled after a failed attempt to take over the Baranoia Empire, only to return upon hearing of the death of Bacchushund and set his sights on the empty throne. After revising Article 12 of the Baranoian Constitution, Bomber challenged Buldont to a duel for the Empire, which he won, banishing Buldont and proclaiming himself "Bomber the Great the 1st, new Emperor of Baranoia".
At first, being new to the throne, Bomber did his best at leading the Empire, trying to win Hysteria's affections and kill the Ohrangers in the process, but he consistently met with failure on both counts and exiled Hysteria as a result. Kaiser Buldont returned to take back his birthright, and Multiwa took control of Bomber by reprogramming him after they removed his arms, replacing them with a sword and a bunker. Bomber was soon sent on a suicide mission to kill the Ohrangers, but was destroyed by King Pyramider Battle Formation (Oh Blocker) before he could accomplish this. A smaller missile called the Great Missile appeared shortly afterward to destroy the sun, only to be flung to the other side of space by Gunmajin and destroyed for good.
Bomber the Great is voiced by Nobuyuki Hiyama (檜山 修之 Hiyama Nobuyuki).
Servants
Acha
Butler Acha (執事アチャ Shitsuji Acha) is Baranoia's Imperial Family Butler, who follows whoever is in command at the time, reading their war declarations and other proclamations. He took care of young Buldont in the field, even serving as the producer of his movie. But for all his work, Acha never gets any respect from the imperial family, who abuse him at times. When Bomber the Great took over, Acha simply forgot about Hysteria and served him; when Buldont returned, the same happened, and Acha couldn't have cared less about Bomber. At the end of the series, he turned good and went with Kocha, Buldont Jr., and Gunmajin back to Gunmajin's planet.
Acha is voiced by Kaneta Kimotsuki (肝付 兼太 Kimotsuki Kaneta).
Kocha
Butler Kocha (執事コチャ Shitsuji Kocha) is a miniature robot who served the family along Acha, always on her partner's shoulder like a pirate captain's parrot. Though not much of a figure due to her size, Kocha can fire beams from her chest. In episode 8, Kocha was outfitted with the Giant System, enabling her to become a hammer for Acha to fling at a Machine Beast, transmitting an enlarging beam into it. At the series finale, Kocha was taken by Gunmajin back to Gunmajin's home planet.
Kocha is voiced by Shinobu Adachi (安達 忍 Adachi Shinobu).
Keris
Machine Beast Tamer Keris (マシン獣使いケリス Mashinjū Tsukai Kerisu, 26–28) is an officer placed in charge of taming feral Machine Beasts, with her own personal dome. When Bacchus Wrath learns of Riki's return, he frantically requests her aid, having her go after Dorin while Yuji, Juri, and Momo attempt to protect her. But after King Ranger arrives and destroys Bara Goblin, Keris assumes her true buxom form, enlarging and capturing King Ranger before taking him into her domain. She then uses Bara King to capture little girls in order to make them her new pets after splicing them with animal DNA, with Dorin as the crown jewel in her collection. However, an eagle hinders Keris from capturing Dorin at the cost of its life as the Ohrangers arrive to aid the girl. Enlarging into her true form, Keris traps Ohranger Robo in an electrified cage until King Pyramider frees it, with OhRed saving the girls before calling in Red Puncher. Keris is then destroyed by King Pyramider Battle Formation.
Keris is played by Akiko Amamatsuri (天祭 揚子 Amamatsuri Akiko), who previously played Rui Senda/Dr. Mazenda in Choujuu Sentai Liveman and Gara in Gosei Sentai Dairanger.
Camera Trick
Camera Trick (カメラトリック Kamera Torikku, Movie, 26–27, 39) is a small bird-resembling video camera monster, serving as a recon for Baranoia's forces.
Camera Trick is voiced by Kazunori Arai (新井 一典 Arai Kazunori).
Sei'ichi Kuroda
Sei'ichi Kuroda (新田一郎 Kuroda Sei'ichi, 17–18): A robotics scientist who stole Yuji's Power Brace so he could power his android son Shigeru, whom he built in the image of his dead son. Because he cares more for machines than people, he allied himself with Baranoia in a shaky alliance of sorts. But once the actions of Bara Vacuum exposed Shigeru's true nature, Kuroda offered Bacchus Wrath the power of Chouriki in return for a permanent means of keeping Shigeru active, along with the post of viceroy of a Baranoia-ruled Earth. Converted into a cyborg, Kuroda captures the Ohrangers one by one while integrating Shigeru into his systems. However, Shigeru manages to break free and release the others. As a result, Bara Ivy is activated to kill them all, and Kuroda sacrifices himself so the Ohrangers can get Shigeru out of harm's way.
Machine Army
Barlo Soldiers
Barlo Soldiers (バーロ兵 Bāro Hei): Mass-produced android troops of the Machine Empire of Baranoia. Their movements and general behavior are similar to those of monkeys. Their weapons are sticks that can extend into battle staffs or spears and give off an electric shock; their heads can open so they can throw cutter discs or nets from their mouths and fire energy blasts from their eyes. They pilot the Octopus jet fighters.
Takonpas
Takonpas (タコンパス Takonpasu): Octopus-like jet fighters which can switch to a "walking" mode, usually piloted by the Barlo Soldiers. The Takonpas seem to be a direct homage to the Martian tripods in H. G. Wells' novel The War of the Worlds.
Baracticas
Baracticas (バラクティカ Barakutika): Battleships in the shape of gears, they can hold up to hundreds of Takonpas and are the means of transportation from Baranoia to Earth.
Machine Beasts
Built on the dark side of the Moon by Baranoia, the Machine Beasts (マシン獣 Mashinjū) are the main weapons used for the invasion of Earth. They come in many types, from mindless weapons of destruction to robots with intelligence or feelings superior to those of humans.
Bara Drill (バラドリル Bara Doriru, 1, 33): The first Machine Beast to be sent to Earth, able to fold its limbs into its body to move. It attacks Shoichi and his group after they are derailed from joining the Ohranger program, but Bara Drill is stopped by Goro, who becomes OhRed and single-handedly destroys the monster. A second giant Bara Drill was later built into a Super Machine Beast, only to be destroyed by Green Blocker.
Bara Saucer (バラソーサー Bara Sōsā, 2, 33): A giant Machine Beast with tentacle arms that Baranoia sent to destroy Tokyo when their demands are not met. Bara Saucer rampages until the assembled Ohrangers arrive, with OhRed and OhGreen holding off the Machine Beast while the others save a teacher and her students. Managing to hack the giant apart with their weapons, the Ohrangers finish Bara Saucer off with the Big Bang Buster. A second giant Bara Saucer was built as a Super Machine Beast, only to be destroyed by Blue Blocker.
Bara Vanish (バラバニッシュ Bara Banisshu, 3, 33): A land mine-like Machine Beast sent to Earth in retaliation for the Ohrangers' debut by targeting a boy named Kenichi Matsumoto in a scheme to reverse engineer Chouriki from his memories of a part of a Pangaean slab he and his father found. Equipped with a solar-powered sensor, Bara Vanish is able to turn itself invisible to get an advantage over its opponents in sunlit areas. After Oh Red heavily damages it when about to access Kenichi's memories, disabling its invisibility, Bara Vanish is destroyed by the Big Bang Buster. A second giant Bara Vanish is built to become a Super Machine Beast, but was destroyed before the process was completed.
Bara Crusher (バラクラッシャー Bara Kurasshā, 4, 33): A ravenous Machine Beast brought to Earth by Acha as part of the Human/Machine Beast hybridization plan by using the monster's Metal Amoebas to convert any organic being into a clone of itself. Once on Earth, Bara Crusher ends up in the boiler room of an apartment building before the Ohrangers find it. Though destroyed by the King Smashers, Bara Crusher bit a police officer named Officer Otani just prior to its death, infecting the man as he became the new Bara Crusher and kidnaps his son Hiroshi and four other children to infect them. But the Ohrangers interfere as Officer Otani regains himself and attempts suicide to protect Hiroshi before the Machine Beast manifests again. Learning of Bara Crusher's fear of fire, Oh Red uses it to force the Metal Amoeba out of Officer Otani's body and blast it to bits. A third giant Bara Crusher was built in 33 as a Super Machine Beast, only to be destroyed by Yellow and Pink Blockers.
Bara Cactus 1 & 2 (バラカクタス Bara Kakutasu 1 & 2, 5): Brother cactoid robots who are loyal to each other, able to disperse a pollen that causes humans to become mindless machine creatures that gain abilities from whatever they eat. Arriving on Earth first, Bara Cactus 1 used his pollen to infect a boy named Takashi before using it on other humans. Once his younger brother Bara Cactus 2 arrives, the brothers overwhelm the Ohrangers with their power until Bara Cactus 1 notices Takashi being protected by his older brother Tsuyoshi and is confounded at seeing the same sibling loyalty he and his brother share. Summoned back to the Baranoia Moon Base, Bara Cactus 1 is scolded for his sentimentality and ordered to deal with the Ohrangers alone, or else Bara Cactus 2 will be destroyed. With Takashi supporting him, Bara Cactus 1 overpowers the Ohrangers until Tsuyoshi snaps his brother out of it, and the two help Yuji use the Star Riser to run the Machine Beast through. Though Bara Cactus 1 survived both the stab and the King Smashers, he returned to the Baranoia Moon Base to find that his brother had been dismantled. He was then blown to pieces by Bacchus Wrath himself, who deemed him and his brother failed creations.
Bara Brain (バラブレイン Bara Burein, 6 & 7): A cunning, psychic Machine Beast. Hysteria sends him to target Chief Miura, probing his mind to get to him through Mitsuko Endo, the daughter of Miura's deceased UA comrade. Separating them, Bara Brain creates Bara Separate and assumes a sphere form to take control of Mitsuko, capturing Chief Miura and using his guilt to trick him into revealing the location of the UA base. Annoyed, Bara Brain changes plans and threatens to kill Mitsuko with his telekinesis if Miura refuses to comply. But OhRed, OhGreen, and OhYellow arrive to save Miura and Mitsuko, with Bara Brain in pursuit of the former. Capturing Miura, Bara Brain demands that the Ohrangers show themselves before noon. The Ohrangers arrive in the completed Chouriki Mobiles, with Bara Brain piloting a Takonpas to battle OhRed in the Sky Phoenix. In the end, Bara Brain dies in conjunction with Bara Separate, due to being mentally linked to it.
Bara Separate (バラセパレート Bara Separēto, 6 & 7): Originally Bara Brain's right eye, the sphere-like Bara Separate enlarges and assimilates surrounding vehicle matter to assume a Machine Beast form, able to radiate destructive lightning and fire powerful blasts from its arms. The top segment of the star on its back can detach and transform into a boomerang of destructive energy. Bara Separate can also transform into a large metal sphere, making it practically invulnerable to attack. Because it is too powerful, OhBlue and OhPink are forced to use Dash Leon and Mao Loader while the others rescue Miura. But Bara Separate defeats them before pursuing Miura as OhRed attempts to get him out of harm's way. Completing the Chouriki Mobiles, the Ohrangers take out the Octopus fighters before forming Ohranger Robo and scrapping Bara Separate's sphere before it can rebuild itself.
Bara Missiler (バラミサイラー Bara Misairā, 8, 33): Bara Missiler comes from the army of the planet Daurora, capable of flying at extreme speeds and operating in outer space. Summoned to attack the city from a distance with the missiles he fires from his arms and shoulders, he lures the Ohrangers into the open before OhRed wounds him. Bara Missiler is then made giant by Acha and Kocha, overpowering Ohranger Robo before firing chains from his torso to drag the mecha and its pilots into the sun's gravitational field. Using the Head system, the Ohrangers escape death, and Ohranger Robo destroys Bara Missiler with the Crown Final Crash. Another giant Bara Missiler was built as a Super Machine Beast before being destroyed by Red Blocker.
Steampunk (スチームパンク Suchīmupanku, Movie): A robotic train monster who peppers his sentences with "baby" and can turn into an actual train. Steampunk was created when all the monsters in the movie fused together into one; being a weak monster, he abducted the children as leverage. Once he lost that leverage, Steampunk shrank and attempted to escape along a train track, but Ohranger Robo sliced the track apart with the Super Crown Crash, and Steampunk, unable to stop, fell to his death below.
Locker Knight (ロッカーナイト Mashinju Rokkānaito): A robotic shower-head monster that rides on horseback and uses his rod to fire lightning.
Cat Signal (ネコシグナル Nekoshigunaru): A robotic traffic sign monster with cat ears, who uses an oil lantern and keychains as weapons and can fire a beam from his eye.
Kabochumpkin (カボチャンプキン Kabochanpukin): A robotic Jack O'Lantern/witch monster, he uses a broom as a weapon.
Jagchuck (ジャグチャック Jaguchakku): A robotic faucet-themed monster, able to extend his mouth and fire water from the faucet on his belly or from his mouth. He was rebuilt into the stronger Machine Beast Bara Jaguchi (バラジャグチ Bara Jaguchi, 31), only to be killed by King Pyramider Battle Formation.
Bara Darts (バラダーツ Bara Dātsu, 9, 33): A scorpion-themed Machine Beast able to shoot Poison Darts from his tail, infecting the target with a rusting poison. Sent to disable the Ohrangers' ability to use Ohranger Robo, Bara Darts infects Goro, Shouhei, and Yuji through trickery. When Momo and Juri arrive to beat the Machine Beast for taking out their male teammates, he reveals he possesses an antidote, and Juri offers to work for Bara Darts, winning the Machine Beast's trust by going after Momo and seemingly attempting to kill her. However, it all turns out to be a ruse by Juri to get the antidote into her hands by faking being accidentally hit by a Poison Dart, only for her to learn she was given a fake container. Momo then uses a hologram to trick Bara Darts into giving up the real antidote. After playing football-style keep-away with Acha and the Baras, Momo takes the antidote to the men while Juri holds off their pursuers. Arriving to Juri's aid in full force, the Ohranger men take out the Baras while the female members double-team Bara Darts before he is defeated by OhRed's sword. After being enlarged by Kocha, Bara Darts battles Ohranger Robo and is weakened by the robot's Vulcan and Cannon Head configurations before being scrapped by the Crown Final Crash. Another giant Bara Darts was built to become a Super Machine Beast, but was destroyed before it could become one.
Bara Hacker (バラハッカー Bara Hakkā, 10): A computer Machine Beast able to upload a wide array of weapons, from saws to bombs. On Buldont's suggestion, Bara Hacker is sent to hack into the UA database to access vital information on the Ohrangers' Chouriki Mobiles. Though unable to get the Chouriki Mobile data, Bara Hacker accesses the Ohrangers' arsenal and can thus counter any of their current weapons. Bara Hacker then proceeds to take control of all digital processes in the city, causing anarchy as a distraction while he tries to access the Chouriki Mobile data at the databank. However, Bara Hacker falls into the Ohrangers' trap; though he counters their King Smashers and then the Big Bang Buster, OhRed defeats him with the newly introduced Giant Roller, for which he has no data to devise a counter. Once enlarged by Kocha, Bara Hacker battles Ohranger Robo and is overwhelmed by the Horn and Graviton Head configurations, which shatter his screen, before being scrapped by the Crown Final Crash.
Bara Printer (バラプリンター Bara Purintā, 11, 33): A Machine Beast known as the Baranoian matchmaker, able to scan an image of an object and use it to make people fall in love with it. Seeing that humans took their everyday appliances for granted, Hysteria summoned Bara Printer to use his beam to make his human victims fall in love with their appliances. He battles the Ohrangers when they catch wind of his scheme, driving them off. When Hysteria calls for a change of plans to make humans fall for Baranoia and use them to do their dirty work, Acha is sent to see it through. However, Shouhei plays on Acha's own desire to be appreciated for his hard work, convincing Acha's footman to make him a loved figure in the world. With this working out as he hoped, Shouhei manages to reflect one of Bara Printer's beams back at him, causing the Machine Beast to become obsessed with Shouhei and chase him down to embrace him. After making light of Acha's plight and suggesting a honeymoon vacation, Kocha accepts Acha's pleas and blasts Bara Printer back to normal. After being weakened by OhGreen's Crusher attack, Bara Printer is defeated with the Giant Roller. Once enlarged by Kocha, Bara Printer battles Ohranger Robo and is weakened by the Horn Head configuration before being scrapped by the Crown Final Crash. Another giant Bara Printer was built to become a Super Machine Beast but was destroyed before it could become one.
Bara Baby (バラベイビー Bara Beibī, 12): A Machine Beast sent by Bacchus Wrath in a plan to make humans hate babies, thus ensuring the end of the human race. Bara Baby does this by projecting a beam from the orb on his head, causing any baby within range to emit painful, highly destructive sound waves whenever it cries. The Ohrangers battle Bara Baby as he is about to alter another baby, and the babies he has already affected cause citywide damage to cover his escape. However, Acha reveals the plan is bound to fail because of the mothers' love for their children. As a result, a change of plans is required: Bara Baby kidnaps baby Kou and places him near an industrial area so his cries can cause an explosion that would affect the entire city. While the others fight off the Baras, Goro fights his way to Kou and saves the baby. Once Kou is returned to his mother, the Ohrangers use the Big Bang Buster to take out Bara Baby. Revitalized as a giant, Bara Baby battles Ohranger Robo until he is destroyed with the Crown Final Crash.
Bara Magma (バラマグマ Bara Maguma, 13, 33): A mining-based Machine Beast with a smaller mechanoid that enables him to use his Magma Beam and Magma Missile attacks. He is sent to Mt. Fuji to make the volcano active in Buldont's scheme to destroy Tokyo. When the Ohrangers arrive, Bara Magma pursues Momo, who eludes him with the aid of Johnny. Later, Bara Magma attempts to shove a boy named Shouta into the pit, but Johnny saves the boy and bites Buldont's hand in the process. As a result, Hysteria has Bara Magma kill the dog for harming her son. Enraged, OhPink battles her way to Bara Magma, fighting him in spite of her injuries until the others arrive and they use the Giant Roller to defeat him. Revitalized as a giant, Bara Magma battles Ohranger Robo, with the Crown Final Crash finishing him off. Another giant Bara Magma was built to become a Super Machine Beast but was destroyed before it could become one.
Bara Pinokiller (バラピノキラー Bara Pinokirā, 14): A Pinocchio-based Machine Beast created as part of a scheme to take out humans while their guard is down through the mass production of Pinocchio-based robots, one of which, a Pet Pinocchio, Juri bought. Donning a cloak and mask, Pinokiller attacks Goro and Juri, but is forced to run off and leave his disguise behind when the others arrive. While the Ohrangers investigate the matter, Acha plants a detonation device in Juri's Pet Pinocchio in a plan to blow up the UA, with Goro chucking the robot away. Exposed, Bara Pinokiller attempts to kill everyone in the factory, including the son of the restaurant owner. When the Ohrangers arrive, Bara Pinokiller assumes his true form; Shouhei saves the people while the other Ohrangers battle Bara Pinokiller as the factory explodes. With the people safe, the Ohrangers defeat Bara Pinokiller with the Giant Roller, and he is then revitalized as a giant. Forming Ohranger Robo, the Ohrangers use the Vulcan Head and Horn Head formations before using the Super Crown Sword to sever the monster's nose and destroy him.
Bara Revenger (バラリベンジャー Bara Ribenjā, 15): A Machine Beast assembled from the collected robot pieces in Baranoia's junkyard and motivated by the grudges of every part he is composed of. Unlike the others, Bara Revenger wants nothing to do with Baranoia other than to kill Bacchus Wrath, yet his power supply is very low. After being knocked off the moon when he proves unable to defeat Bacchus Wrath one on one, Bara Revenger descends to Earth, where he attempts to increase his power before he shuts down, evading the Ohrangers when they arrive on the scene. Though distrusting him at first like the others, Yuji attempts to befriend Bara Revenger after he saves a dog from being run over by a car. After being saved with a Chouriki transfusion when he is about to shut down, Bara Revenger is astonished by the selfless act and vows to help him. However, Acha arrives and has the Baras attack Bara Revenger and OhBlue. When the others arrive, Bacchus Wrath shows up and has Acha implant a remote-controlled device to force Bara Revenger into fighting the Ohrangers against his will. In spite of his attempts to snap Bara Revenger out of it, OhBlue is forced to mortally wound him with the Giant Roller, returning him to normal. The Ohrangers vow to succeed where he failed as he limps off to a garbage dump, where he shuts down and crumbles back into spare parts.
Bara Devil (バラデビル Bara Debiru, 16): A Machine Beast who can attack with music from the piano on his solar plexus, sent by Buldont to play his Devil World symphony and invoke natural disasters. As Shouhei arrives, a boy named Jun comes from the future as a result of a Time Split caused by the music. Because Jun's pendant can counteract his music, Bara Devil goes after him. He enlarges as OhGreen calls for the Chouriki Mobiles; OhRed uses the Giant Roller to free his comrade from Bara Devil's grip, and OhGreen borrows Jun's pendant for Miura to analyze while Ohranger Robo battles Bara Devil. Using the Horn Head formation, Ohranger Robo forces a timeslip to occur so Jun can return to his own time. With that done, Bara Devil is destroyed with the Crown Final Crash.
Bara Vacuum (バラバキューム Bara Bakyūmu, 17, 33): A Machine Beast brought in by Buldont to acquire Yuji's Power Brace from Seiichi Kuroda, able to suck people up into his vacuum-like tube and to use his machine forearm. After sucking Yuji in, Bara Vacuum goes after the Power Brace in a game of keep-away until he sucks OhRed in along with the Power Brace. Once Yuji gets his Power Brace back, he and OhRed break out and fight Bara Vacuum before the Ohrangers finish him off with the Giant Roller. Once Bara Vacuum is revitalized by Kocha, Ohranger Robo uses the Graviton Head formation before destroying him with the Crown Final Crash. Another giant Bara Vacuum was built to become a Super Machine Beast but was destroyed before it could become one.
Bara Ivy (バラアイビー Bara Aibī, 18, 33): A plant-like Machine Beast that is able to burrow underground and uses his vines to attack people. Integrated into Kuroda's body, Bara Ivy kidnaps Momo, Shouhei, and Juri while activating Shigeru's programming so that he too is integrated into Kuroda's cyborg body. When Shigeru breaks out of Kuroda's control, Bara Ivy disconnects from Kuroda and kills him before fighting the Ohrangers, who use the Big Bang Buster to deactivate him. Revitalized by Kocha, Bara Ivy battles Ohranger Robo and is destroyed by the Crown Final Crash. Another giant Bara Ivy was built to become a Super Machine Beast but was destroyed before it could become one.
Bara Builder (バラビルダー Bara Birudā, 19): A giant Machine Beast that Bacchus Wrath has suck up the city's electricity before fighting Ohranger Robo, displaying an ability to upgrade itself to counter Ohranger Robo's moves. Faking a surrender, Bara Builder drains Ohranger Robo of its energy before heavily damaging the giant robot. Learning of Red Puncher's existence, Bacchus Wrath attempts to stop OhRed from getting to the new robot. But once the mecha is awakened and OhRed soothes its berserker nature, Red Puncher destroys Bara Builder with the Puncher Gatling.
Bara Boxer (バラボクサー Bara Bokusā, 20): A boxer-themed robot armed with a boxing-bell mallet that Buldont had built to counter Red Puncher. First appearing at normal size, Bara Boxer proves his combat superiority to OhGreen before being enlarged to overpower Red Puncher in what turns out to be an exhibition match. After being fitted with spiked boxing gloves, Bara Boxer is sent back to Earth to finish the job. But having learned some boxing moves from Shouhei and his friends, OhRed uses Red Puncher to its full fighting potential to turn the tables before destroying Bara Boxer with the Magna Puncher.
Bara Kendama (バラケンダマ Bara Kendama, 21): Built in the likeness of the giant Kendama Robo, Bara Kendama was used by Hysteria to pose as the robot when it was donated to be piloted by OhBlue. Once the trap is sprung, Bara Kendama reveals his true face and holds Yuji hostage as he goes on a rampage with Hysteria. OhRed is powerless to fight back in Red Puncher out of fear of killing his teammate, until he leads Bara Kendama into a trap set by OhGreen piloting Kendama Robo. This allows OhPink and OhYellow to get Yuji out so Red Puncher can finish the Machine Beast off with its Magna Puncher.
Bara Madillo (バラマジロ Bara Majiro, 22): A giant armadillo-themed robot that can roll up into a spiked ball, making it nearly invulnerable to attack. Bara Madillo easily overpowers Ohranger Robo and Red Puncher, forcing them to attempt to form Buster Ohranger Robo. But as the two robots are unable to combine because the latter's memory chip is missing, Ohranger Robo and Red Puncher are forced to fall back. Learning that the chip ended up in the possession of a boy named Satoru, Bara Madillo is sent after the boy, with Ohranger Robo holding the Machine Beast off while OhRed gets Satoru to safety. After Bacchus Wrath destroys the portable computer holding the chip, it turns out the blast tattooed the program code onto Goro's back, which Miura types into Red Puncher's programming as it is deployed. Once Buster Ohranger Robo is formed, it uses the Big Cannon Burst to destroy Bara Madillo.
Bara Clothes (バラクローズ Bara Kurōzu, 23): A silkworm-like Machine Beast who can spray silk from his mouth. After finding Buldont and Bacchus Wrath with a swimsuit issue, an upset Hysteria deploys Bara Clothes to use his powers to convert people's outfits into battle armor that lets him control the wearers through the projection from his head. The male Ohrangers and Momo are among those affected, and Juri is forced to run off after her Power Brace is taken. Though overpowered and outnumbered, Juri realizes Bara Clothes' method and resorts to fighting him in a bikini, breaking the Machine Beast's hold over her teammates. Once defeated by the Ohrangers, Bara Clothes is enlarged; Ohranger Robo fights him with Red Puncher's support before they form Buster Ohranger Robo and destroy him.
Bara Kakka (バラカッカ Bara Kakka, 24): An odd Napoleon-like robot able to assume human form, Bara Kakka arrives in Japan during Tanabata. Falling in love with Momo at first sight, he becomes obsessed with her, using his talents as a master of disguise and dimensions to cause trouble for her, and assumes his true form when she turns down his declaration of love. Though he has the advantage over OhPink and intends to boil her alive, Bara Kakka has a change of heart after Auntie stops him and puts him in his place. After freeing Momo, Bara Kakka is reminded of his mission, and the Ohrangers fight and defeat him with the Giant Roller. Enlarged, Bara Kakka battles Ohranger Robo and Red Puncher before they form Buster Ohranger Robo to finish him off.
Bara Hungry (バラハングリー Bara Hangurī, 25): A Machine Beast with a hard body, able to cover food in mold. However, he gets sidetracked when he intrudes on a family celebrating the father's birthday, literally eating them out of house and home before getting drunk on sake. The Ohrangers acquire the unconscious Bara Hungry and attempt to blow him up, but Acha reactivates Bara Hungry and he runs off. After a scolding from Acha, Bara Hungry resumes his plan until the Ohrangers lure him out into a festive battle. Distracted by the festive dancing, Bara Hungry is tricked into drinking sake and gets too drunk to defend himself against the new Ole Bazooka. After being enlarged, Bara Hungry covers Ohranger Robo in mold before proceeding to eat it piece by piece. Distracted once again, Bara Hungry is sucker-punched by Red Puncher before the Machine Beast is destroyed by Buster Ohranger Robo.
Bara Goblin (バラゴブリン Bara Goburin, 26 & 27): A Machine Beast kept in a cage by Keris as a pet; she summons Bara Goblin to handle OhBlue, OhYellow, and OhPink while attempting to kill Dorin. But before he can get to Dorin, Riki arrives to save her. Becoming King Ranger, he uses the King Victory Flash to defeat Bara Goblin before Kocha enlarges him. After overpowering Ohranger Robo and Red Puncher, Bara Goblin is easily destroyed by King Pyramider.
Bara King (バラキング Bara Kingu, 27 & 28): Another of Keris' pets, Bara King is designed after King Ranger after she abducts him. Sent to Earth to capture little girls for his mistress' plan, Bara King is made to mess with the Ohrangers by making them think Riki is converted into a Machine Beast. But once Riki returns, the truth is revealed as King Ranger defeats Bara King. Once enlarged by Kocha, Bara King aids Keris in fighting Ohranger Robo and Red Puncher. But once King Pyramider assumes Carrier Formation, Bara King is destroyed by its barrage.
Bara Tarantula (バラタランチュラ Bara Taranchura, 29): A tarantula-themed robot able to shrink to the size of a spider, used by Acha to implant children with receivers that make them seem to be geniuses. The Machine Beast can also use the receivers and the mini-spiders on his body to control his targets. After picking up the signal in her nephew's class, Juri and company track Bara Tarantula's signal to his hideout. After being defeated by King Ranger and damaged by the Ole Bazooka, Bara Tarantula is enlarged by Kocha and overpowers Ohranger Robo and Red Puncher until King Pyramider intervenes. In the end, Bara Tarantula is scrapped by King Pyramider Battle Formation.
Bara Gusuka (バラグースカ Bara Gūsuka, 30): A sloth-like Machine Beast with a nightcap whose very presence and singing cause everyone around him to fall asleep. Though Bacchus Wrath wanted the Machine Beast scrapped for this, Hysteria suggests using him on the humans. After the Octopus he was riding is shot down, Bara Gusuka manages to put the Ohrangers to sleep so Acha can kill them. However, King Ranger, immune to the Machine Beast's effects, interferes and scraps Bara Gusuka. Henna then rebuilds Bara Gusuka into a wind-up lullaby machine so he can resume his mission, before Acha remodels Bara Gusuka into a fighting machine with a speaker where his face used to be. While his second form is powerful enough that even King Ranger, along with people far away, feels the effects, Bara Gusuka is stopped when Dorin has Paku bite the power cord. Rendered powerless, Bara Gusuka is taken down by the Ole Bazooka. Enlarged by Kocha, Bara Gusuka is easily scrapped by King Pyramider Battle Formation, to Hysteria's dismay.
Bara Nightmare (バラナイトメア Bara Naitomea, 32): A white robot that wields a cleaver, with the power to trap people's minds in his dream world. He defied Baranoia and fled to Earth in the year 1988, taking on the disguise of a human covered in black robes. A psychotic hebephile, he began abducting young girls he fancied and trapping their minds in his nightmare while keeping their bodies in suspended animation, so that he could watch them live out his fantasies forever. His original victim was Momo's best friend Mayumi, and after eleven years her mind reached out to Momo in dreams for help. When Momo began investigating, he tried to trap her in his nightmare but was thwarted by her fellow Ohrangers. He was killed by the Chouriki Dynamite Group attack, and the girls were freed to resume their stolen lives.
Bara Mammoth (バラマンモス Bara Manmosu, 35): A giant mammoth-themed robot. First monster to be summoned by Bomber the Great. Killed by Oh Blocker.
Machine Beast: Bara Skunk (バラスカンク Bara Sukanku, 36): A skunk-themed robot who works for Bomber the Great in creating a new Baranoia and refers to himself as the most feared Machine Beast in the universe. He produces gas from his tail and mouth after eating garbage; this gas is so intense that it instantly breaks any material down to the atomic level. Killed by OhBlocker (with the help of Tackle Boy).
Bara Police (バラポリス Bara Porisu, 37–38): A police officer-themed robot. Disguising himself as a police chief, he had numerous policemen under his control. Killed by Gunmajin and OhBlocker.
Bara Gold (バラゴールド Bara Gōrudo, 39–40): A golden retriever-themed robot, the last monster used by Bomber the Great. He turned anything into gold, and as he did so, he grew in size. Among the things he turned into gold were OhGreen's legs and arms, as well as several parts of Blue Blocker, Pink Blocker, Yellow Blocker, and Red Blocker. Everything turned to gold returned to normal after Red Puncher destroyed Bara Gold's device. Killed by King Pyramider Battle Formation.
Bara Hunter (バラハンター Bara Hantā, 42): A robotic monster that could blast lasers from its shoulder-protruding cannon. Killed by King Pyramider Battle Formation (OhBlocker).
Bara Fraud (バラペテン Bara Peten, 43): A robotic monster that could blast lasers from the cannon on its forehead. Killed by Oh Blocker (with the help of Tackle Boy).
Bara Guard (バラガード Bara Gādo, 44): At first disguised himself as a dog as part of a trap. A robot monster that wielded a large bazooka-like weapon in battle. This robot could blast lasers and was very strong. Although he survives OhBlocker and Tackle Boy's attack, Bara Guard is killed by Gunmajin.
Bara Micron (バラミクロン Bara Mikuron, 45–46): A centipede-like monster that used a "divide-and-conquer" tactic, breaking its body into segments, each with its own method of attacking its enemies. It could spray a beam of small dark particles to give any machine a mind of its own, including Red Puncher and Oh Blocker, and to take away the Ohrangers' henshin powers (including King Ranger's). Killed by pyramid-shaped energy summoned by Dorin (which sent the Ohrangers in King Pyramider elsewhere).
Bara Gear (バラハグルマ Bara Haguruma, Ohranger vs. Kakuranger): Used by Bacchus Wrath in the bet he made with his son. Bara Gear is capable of placing his "Super Gears" on any machine to take it over as well as powering-up the Barlo Soldiers. He also briefly piloted Ohranger Robo after placing a gear in it. Can also attack with "Gear Bombs". He was later combined with Onbu-Bake to form Onbu-Gear (オンブハグルマ Onbuhaguruma) through their Super Machine Youkai Fusion (超マシン妖怪合体 Chō Mashin Yōkai Gattai), who was killed by Ohranger Robo, Red Puncher and Oh Blocker with Tackle Boy.
Bara Mobile (バラモビル Bara Mobiru, CarRanger vs. Ohranger): The last of the Baranoians, he intended to enlist the aid of the Bowzock to establish his own empire of "car-people". Originally aided by the Carrangers (or specifically Kyousuke Jinnai/Red Racer) due to confusion between the teams, he soon kidnapped Goro to make him the first of his "car-people", only to be thwarted by the Carranger and Ohranger. He was able to enlarge himself without the aid of Acha and Kocha. Killed by Ohranger Robo.
Episodes
Movie
The movie version of Chouriki Sentai Ohranger was directed by Yoshiaki Kobayashi and written by Shōzō Uehara. It premiered in Japan on April 15, 1995, at the Toei Super Hero Fair '95. It was originally shown as a triple feature alongside Mechanical Violator Hakaider and the feature film version of Juukou B-Fighter.
Crossovers
(Takes place between Episodes 33 and 34 of Chouriki Sentai Ohranger) – A 1996 direct-to-video movie which depicts a crossover between Ohranger and Ninja Sentai Kakuranger.
(Takes place between Episodes 38 and 39 of Gekisou Sentai Carranger) – A 1997 direct-to-video crossover between Gekisou Sentai Carranger and Ohranger.
Cast
:
:
:
:
:
:
: ,
:
Kotaro Henna:
Voice actors
Paku:
:
:
:
:
: ,
:
:
:
Narration:
Songs
Opening theme
Lyrics:
Composition:
Arrangement:
Artist:
Ending theme
Lyrics: Saburo Yatsude
Composition: Yasuo Kosugi
Arrangement:
Artist: Kentarō Hayami
Episodes: 1–47
Lyrics/Composition: (as KYOKO)
Arrangement:
Artist: Kentarō Hayami
Episodes: 48
Notes
References
External links
Chouriki Sentai Ohranger at the official Super Sentai website
Official Shout! Factory page
Official Shout Factory TV page
1995 Japanese television series debuts
1996 Japanese television series endings
Super Sentai
Television series set in 1999
Television series set in the future
Japanese action television series
Japanese fantasy television series
Japanese science fiction television series
1990s Japanese television series
Fictional soldiers
Rise and Resurrection of the American Programmer

Rise and Resurrection of the American Programmer is a book written by Edward Yourdon in 1996. It is the sequel to Decline and Fall of the American Programmer. In the original, written at the beginning of the '90s, Yourdon warned American programmers that their business was not sustainable against foreign competition. By the middle of the decade Microsoft had released Windows 95, which marked a groundbreaking new direction for the operating system, the internet was beginning to rise as a serious consumer marketplace, and the Java software platform had made its first public release.
Due to such large changes in the state of the software industry, Yourdon reversed some of his original predictions. Notably absent from the book is any significant consideration of the open source software movement, particularly the development of the Linux kernel and the GNU operating system, which would come to have increasing significance over the following decade in shaping the software industry. The internet, Microsoft's business strategy, and Java, all of which feature significantly in Yourdon's thesis, would come to be heavily influenced by this phenomenon.
Chapter Outline
Part One: Decline & Fall Reexamined
1. The Original Premise
2. Peopleware
3. The Other Silver Bullets
Part Two: Repaving Cowpaths
4. System Dynamics
5. Personal Software Practices
6. Best Practices
7. Good-Enough Software
Part Three: The Brave New World
8. Service Systems
9. The Internet
10. Java and the New Programming Paradigm
11. The Microsoft Paradigm
12. Embedded Systems and Brave New Worlds
13. Past, Present, and Future
Appendix: An Updated Programmer's Bookshelf
1996 non-fiction books
Software development books
Software quality
Software industry
Science and technology in the United States
Prentice Hall books |
History of watches

The history of watches began in 16th-century Europe, where watches evolved from portable spring-driven clocks, which first appeared in the 15th century.
The watch was developed by inventors and engineers from the 16th century to the mid-20th century as a mechanical device, powered by winding a mainspring which turned gears and then moved the hands; it kept time with a rotating balance wheel. The invention of the quartz watch in the 1960s, which ran on electricity and kept time with a vibrating quartz crystal, proved a radical departure for the watchmaking industry. During the 1980s quartz watches took over the market from mechanical watches, a process referred to as the "quartz crisis". Although mechanical watches still sell at the high end of the watch market, the vast majority of watches have quartz movements.
One account of the origin of the word "watch" suggests that it came from the Old English word woecce which meant "watchman", because town watchmen used watches to keep track of their shifts. Another theory surmises that the term came from 17th-century sailors, who used the new mechanisms to time the length of their shipboard watches (duty shifts).
The Oxford English Dictionary records the word watch in association with a timepiece from at least as early as 1542.
Clock-watch
The first timepieces to be worn, made in the 16th century beginning in the German cities of Nuremberg and Augsburg, were transitional in size between clocks and watches. Portable timepieces were made possible by the invention of the mainspring in the early 15th century. Nuremberg clockmaker Peter Henlein (or Henle or Hele) (1485-1542) is often credited as the inventor of the watch. He was one of the first German craftsmen who made "clock-watches", ornamental timepieces worn as pendants, which were the first timepieces to be worn on the body. His fame is based on a passage by Johann Cochläus in 1511,
Peter Hele, still a young man, fashions works which even the most learned mathematicians admire. He shapes many-wheeled clocks out of small bits of iron, which run and chime the hours without weights for forty hours, whether carried at the breast or in a handbag
However, other German clockmakers were creating miniature timepieces during this period, and there is no evidence Henlein was the first.
These 'clock-watches' were fastened to clothing or worn on a chain around the neck. They were heavy drum-shaped cylindrical brass boxes several inches in diameter, engraved and ornamented. They had only an hour hand. The face was not covered with glass, but usually had a hinged brass cover, often decoratively pierced with grillwork so the time could be read without opening. The movement was made of iron or steel and held together with tapered pins and wedges, until screws began to be used after 1550. Many of the movements included striking or alarm mechanisms. They usually had to be wound twice a day. The shape later evolved into a rounded form; these were later called Nuremberg eggs. Still later in the century there was a trend for unusually-shaped watches, and clock-watches shaped like books, animals, fruit, stars, flowers, insects, crosses, and even skulls (Death's head watches) were made.
These early clock-watches were not worn to tell the time. The accuracy of their verge and foliot movements was so poor, with errors of perhaps several hours per day, that they were practically useless. They were made as jewelry and novelties for the nobility, valued for their fine ornamentation, unusual shape, or intriguing mechanism, and accurate timekeeping was of very minor importance.
Pocketwatch
Styles changed in the 17th century and men began to wear watches in pockets instead of as pendants (the woman's watch remained a pendant into the 20th century). This is said to have occurred in 1675 when Charles II of England introduced waistcoats. This was not just a matter of fashion or prejudice; watches of the time were notoriously prone to fouling from exposure to the elements, and could only reliably be kept safe from harm if carried securely in the pocket. To fit in pockets, their shape evolved into the typical pocketwatch shape, rounded and flattened with no sharp edges. Glass was used to cover the face beginning around 1610. Watch fobs began to be used, the name originating from the German word fuppe, a pocket. Later in the 1800s Prince Albert, the consort to Queen Victoria, introduced the 'Albert chain' accessory, designed to secure the pocket watch to the man's outergarment by way of a clip. The watch was wound and also set by opening the back and fitting a key to a square arbor, and turning it.
The timekeeping mechanism in these early pocketwatches was the same one used in clocks, invented in the 13th century; the verge escapement which drove a foliot, a dumbbell shaped bar with weights on the ends, to oscillate back and forth. However, the mainspring introduced a source of error not present in weight-powered clocks. The force provided by a spring is not constant, but decreases as the spring unwinds. The rate of all timekeeping mechanisms is affected by changes in their drive force, but the primitive verge and foliot mechanism was especially sensitive to these changes, so early watches slowed down during their running period as the mainspring ran down. This problem, called lack of isochronism, plagued mechanical watches throughout their history.
Efforts to improve the accuracy of watches prior to 1657 focused on evening out the steep torque curve of the mainspring. Two devices to do this had appeared in the first clock-watches: the stackfreed and the fusee. The stackfreed, a spring-loaded cam on the mainspring shaft, added a lot of friction and was abandoned after about a century. The fusee was a much more lasting idea. A curving conical pulley with a chain wrapped around it attached to the mainspring barrel, it changed the leverage as the spring unwound, equalizing the drive force. Fusees became standard in all watches, and were used until the early 19th century. The foliot was also gradually replaced with the balance wheel, which had a higher moment of inertia for its size, allowing better timekeeping.
Balance spring
A great leap forward in accuracy occurred in 1657 with the addition of the balance spring to the balance wheel, an invention disputed both at the time and ever since between Robert Hooke and Christiaan Huygens. Prior to this, the only force limiting the back and forth motion of the balance wheel under the force of the escapement was the wheel's inertia. This caused the wheel's period to be very sensitive to the force of the mainspring. The balance spring made the balance wheel a harmonic oscillator, with a natural 'beat' resistant to disturbances. This increased watches' accuracy enormously, reducing error from perhaps several hours per day to perhaps 10 minutes per day, resulting in the addition of the minute hand to the face from around 1680 in Britain and 1700 in France. The increased accuracy of the balance wheel focused attention on errors caused by other parts of the movement, igniting a two century wave of watchmaking innovation.
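Why a sprung balance is so much less sensitive to drive force can be sketched with the standard torsional-oscillator relation (an idealized model added here for illustration, not a formula taken from the sources above): a balance wheel with moment of inertia I restrained by a spring of torsional stiffness \kappa swings with period

    T = 2\pi \sqrt{I / \kappa}

which, for an ideal spring, depends only on the wheel and the spring. Variations in mainspring torque therefore mainly change the amplitude of the swing rather than its timing, which is the sense in which the balance spring gave the wheel a natural 'beat' resistant to disturbances.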
The first thing to be improved was the escapement. The verge escapement was replaced in quality watches by the cylinder escapement, invented by Thomas Tompion in 1695 and further developed by George Graham in the 1720s. In Britain a few quality watches went to the duplex escapement, invented by Jean Baptiste Dutertre in 1724. The advantage of these escapements was that they only gave the balance wheel a short push in the middle of its swing, leaving it 'detached' from the escapement to swing back and forth undisturbed during most of its cycle.
During the same period, improvements in manufacturing such as the tooth-cutting machine devised by Robert Hooke allowed some increase in the volume of watch production, although finishing and assembling was still done by hand until well into the 19th century.
Temperature compensation and chronometers
The Enlightenment view of watches as scientific instruments brought rapid advances to their mechanisms. The development during this period of accurate marine chronometers required in celestial navigation to determine longitude during sea voyages produced many technological advances that were later used in watches. It was found that a major cause of error in balance wheel timepieces was changes in elasticity of the balance spring with temperature changes. This problem was solved by the bimetallic temperature compensated balance wheel invented in 1765 by Pierre Le Roy and improved by Thomas Earnshaw. This type of balance wheel had two semicircular arms made of a bimetallic construction. If the temperature rose, the arms bent inward slightly, causing the balance wheel to rotate faster back and forth, compensating for the slowing due to the weaker balance spring. This system, which could reduce temperature induced error to a few seconds per day, gradually began to be used in watches over the next hundred years.
The going barrel invented in 1760 by Jean-Antoine Lépine provided a more constant drive force over the watch's running period, and its adoption in the 19th century made the fusee obsolete. Complicated pocket chronometers and astronomical watches with many hands and functions were made during this period.
Lever escapement
The lever escapement, invented by Thomas Mudge in 1759 and improved by Josiah Emery in 1785, gradually came into use from about 1800 onwards, chiefly in Britain; it was also adopted by Abraham-Louis Breguet, but Swiss watchmakers (who by now were the chief suppliers of watches to most of Europe) mostly adhered to the cylinder until the 1860s. By about 1900, however, the lever was used in almost every watch made. In this escapement the escape wheel pushed on a T shaped 'lever', which was unlocked as the balance wheel swung through its centre position and gave the wheel a brief push before releasing it. The advantages of the lever was that it allowed the balance wheel to swing completely free during most of its cycle; due to 'locking' and 'draw' its action was very precise; and it was self-starting, so if the balance wheel was stopped by a jar it would start again.
Jewel bearings, introduced in England in 1702 by the Swiss mathematician Nicolas Fatio de Duillier, also came into use for quality watches during this period. Watches of this period are characterised by their thinness. New innovations, such as the cylinder and lever escapements, allowed watches to become much thinner than they had previously been. This caused a change in style. The thick pocketwatches based on the verge movement went out of fashion and were only worn by the poor, and were derisively referred to as "onions" and "turnips".
Mass production
At Vacheron Constantin, Geneva, Georges-Auguste Leschot (1800–1884), pioneered the field of interchangeability in clockmaking by the invention of various machine tools. In 1830 he designed an anchor escapement, which his student, Antoine Léchaud, later mass-produced. He also invented a pantograph, allowing some degree of standardisation and interchangeability of parts on watches fitted with the same calibre.
The British had predominated in watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high quality products for the elite. Although there was an attempt to modernise clock manufacture with mass production techniques and the application of duplicating tools and machinery by the British Watch Company in 1843, it was in the United States that this system took off. Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that used interchangeable parts, and by 1861 was running a successful enterprise incorporated as the Waltham Watch Company.
The railroads' stringent requirements for accurate watches to safely schedule trains drove improvements in accuracy. The engineer Webb C. Ball established, around 1891, the first precision standards and a reliable timepiece inspection system for railroad chronometers. Temperature-compensated balance wheels began to be widely used in watches during this period, and jewel bearings became almost universal. Techniques for adjusting the balance spring for isochronism and positional errors discovered by Abraham-Louis Breguet, M. Phillips, and L. Lossier were adopted. The first international watch precision contest took place in 1876, during the International Centennial Exposition in Philadelphia (the winning four top watches, which outclassed all competitors, had been randomly selected out of the mass production line); on display was also the first fully automatic screw-making machine. By 1900, with these advances, the accuracy of quality watches, properly adjusted, topped out at a few seconds per day.
The American clock industry, with scores of companies located in Connecticut's Naugatuck Valley, was producing millions of clocks, earning the region the nickname, "Switzerland of America". The Waterbury Clock Company was one of the largest producers for both domestic sales and export, primarily to Europe. Today its successor, Timex Group USA, Inc. is the only remaining watch company in the region.
From about 1860, key winding was replaced by keyless winding, where the watch was wound by turning the crown. The pin pallet escapement, an inexpensive version of the lever escapement invented in 1876 by Georges Frederic Roskopf was used in cheap mass-produced watches, which allowed ordinary workers to own a watch for the first time; other cheap watches used a simplified version of the duplex escapement, developed by Daniel Buck in the 1870s.
During the 20th century, the mechanical design of the watch became standardized, and advances were made in materials, tolerances, and production methods. The bimetallic temperature-compensated balance wheel was made obsolete by the discovery of low-thermal-coefficient alloys invar and elinvar. A balance wheel of invar with a spring of elinvar was almost unaffected by temperature changes, so it replaced the complicated temperature-compensated balance. The discovery in 1903 of a process to produce artificial sapphire made jewelling cheap. Bridge construction superseded 3/4 plate construction.
Wristwatch
From the beginning, wristwatches were almost exclusively worn by women, while men used pocketwatches up until the early 20th century. The concept of the wristwatch goes back to the production of the very earliest watches in the 16th century. Some people say the world's first wristwatch was created by Abraham-Louis Breguet for Caroline Murat, Queen of Naples, in 1810. However, Elizabeth I of England received a wristwatch from Robert Dudley in 1571, described as an arm watch, 239 years before Breguet's 1810 piece. By the mid-nineteenth century, most watchmakers produced a range of wristwatches, often marketed as bracelets, for women.
Wristwatches were first worn by military men towards the end of the nineteenth century, when the importance of synchronizing maneuvers during war without potentially revealing the plan to the enemy through signaling was increasingly recognized. It was clear that using pocket watches while in the heat of battle or while mounted on a horse was impractical, so officers began to strap the watches to their wrist. The Garstin Company of London patented a 'Watch Wristlet' design in 1893, although they were probably producing similar designs from the 1880s. Clearly, a market for men's wristwatches was coming into being at the time. Officers in the British Army began using wristwatches during colonial military campaigns in the 1880s, such as during the Anglo-Burma War of 1885.
During the Boer War, the importance of coordinating troop movements and synchronizing attacks against the highly mobile Boer insurgents was paramount, and the use of wristwatches subsequently became widespread among the officer class. The company Mappin & Webb began production of their successful 'campaign watch' for soldiers during the campaign at the Sudan in 1898 and ramped up production for the Boer War a few years later.
These early models were essentially standard pocketwatches fitted to a leather strap, but by the early 20th century, manufacturers began producing purpose-built wristwatches. The Swiss company, Dimier Frères & Cie patented a wristwatch design with the now standard wire lugs in 1903. In 1904, Alberto Santos-Dumont, an early Brazilian aviator, asked his friend, a French watchmaker called Louis Cartier, to design a watch that could be useful during his flights. Hans Wilsdorf moved to London in 1905 and set up his own business with his brother-in-law Alfred Davis, Wilsdorf & Davis, providing quality timepieces at affordable prices – the company later became Rolex. Wilsdorf was an early convert to the wristwatch, and contracted the Swiss firm Aegler to produce a line of wristwatches. His Rolex wristwatch of 1910 became the first such watch to receive certification as a chronometer in Switzerland and it went on to win an award in 1914 from Kew Observatory in London.
The impact of the First World War dramatically shifted public perceptions on the propriety of the man's wristwatch, and opened up a mass market in the post-war era. The creeping barrage artillery tactic, developed during the War, required precise synchronization between the artillery gunners and the infantry advancing behind the barrage. Service watches produced during the War were specially designed for the rigours of trench warfare, with luminous dials and unbreakable glass. Wristwatches were also found to be needed in the air as much as on the ground: military pilots found them more convenient than pocket watches for the same reasons as Santos-Dumont had. The British War Department began issuing wristwatches to combatants from 1917.
The company H. Williamson Ltd., based in Coventry, was one of the first to capitalize on this opportunity. During the company's 1916 AGM it was noted that "...the public is buying the practical things of life. Nobody can truthfully contend that the watch is a luxury. It is said that one soldier in every four wears a wristlet watch, and the other three mean to get one as soon as they can." By the end of the War, almost all enlisted men wore a wristwatch, and after they were demobilized, the fashion soon caught on – the British Horological Journal wrote in 1917 that "...the wristlet watch was little used by the sterner sex before the war, but now is seen on the wrist of nearly every man in uniform and of many men in civilian attire." By 1930, the ratio of wrist- to pocketwatches was 50 to 1. The first successful self-winding system was invented by John Harwood in 1923.
In 1961, the first wristwatch traveled to space on the wrist of Yuri Gagarin on Vostok 1.
Electric watch
The first generation of electric-powered watches came out during the 1950s. These kept time with a balance wheel powered by a solenoid, or in a few advanced watches that foreshadowed the quartz watch, by a steel tuning fork vibrating at 360 Hz, powered by a solenoid driven by a transistor oscillator circuit. The hands were still moved mechanically by a wheel train. In mechanical watches the self winding mechanism, shockproof balance pivots, and break resistant 'white metal' mainsprings became standard. The jewel craze caused 'jewel inflation' and watches with up to 100 jewels were produced.
Quartz watch
In 1959, Seiko placed an order with Epson (a subsidiary of Seiko and the 'brain' behind the quartz revolution) to start developing a quartz wristwatch. The project was codenamed 59A. By the 1964 Tokyo Summer Olympics, Seiko had a working prototype of a portable quartz watch, which was used for time measurement throughout the event.
The first quartz watch to enter production was the Seiko 35 SQ Astron, which hit the shelves on 25 December 1969 as the world's most accurate wristwatch to date.
Since the technology had been developed by contributions from Japanese, American, and Swiss engineers, nobody could patent the whole movement of the quartz wristwatch, which allowed other manufacturers to participate in the rapid growth and development of the quartz watch market. In less than a decade, this ended almost 100 years of dominance by the mechanical wristwatch.
The introduction of the quartz watch in 1969 was a revolutionary improvement in watch technology. In place of a balance wheel which oscillated at 5 beats per second, it used a quartz crystal resonator which vibrated at 8,192 Hz, driven by a battery-powered oscillator circuit. In place of a wheel train to add up the beats into seconds, minutes, and hours, it used digital counters. The higher Q factor of the resonator, along with quartz's low temperature coefficient, resulted in better accuracy than the best mechanical watches, while the elimination of all moving parts made the watch more shock-resistant and eliminated the need for periodic cleaning. The first digital electronic watch with an LED display was developed in 1970 by Pulsar. In 1974 the Omega Marine Chronometer was introduced, the first wrist watch to hold Marine Chronometer certification, and accurate to 12 seconds per year.
Accuracy increased with the frequency of the crystal used, but so did power consumption, so the first-generation watches used low frequencies of a few kilohertz, limiting their accuracy. The power-saving use of CMOS logic and LCDs in the second generation increased battery life and allowed the crystal frequency to be raised to 32,768 Hz, resulting in accuracy of 5–10 seconds per month. By the 1980s, quartz watches had taken over most of the watch market from the mechanical watch industry. This upheaval, which saw the majority of watch manufacturing move to the Far East, is referred to in the industry as the "quartz crisis".
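To make the relationship between crystal frequency error and timekeeping drift concrete, the following short Python sketch (an illustration rather than material from a cited source; the 30-day month and the part-per-million error values are assumptions) converts a relative frequency error into seconds of drift per month:

```python
# Illustration: converting a quartz crystal's relative frequency error
# into timekeeping drift over a 30-day month.

SECONDS_PER_MONTH = 30 * 24 * 3600  # assumed 30-day month

def drift_seconds_per_month(error_ppm: float) -> float:
    """Drift accumulated in one month for a given error in parts per million."""
    return SECONDS_PER_MONTH * error_ppm / 1_000_000

for ppm in (2, 4):
    print(f"{ppm} ppm error -> {drift_seconds_per_month(ppm):.1f} s/month")
# 2 ppm -> ~5.2 s/month and 4 ppm -> ~10.4 s/month, consistent with the
# 5-10 seconds per month figure quoted above for 32,768 Hz watches.
```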
In 2010, Miyota (Citizen Watch) of Japan introduced a newly developed movement that uses a new type of quartz crystal with ultra-high frequency (262.144 kHz) which is claimed to be accurate to +/- 10 seconds a year, and has a smooth sweeping second hand rather than one that jumps.
In 2019, Citizen Watch advanced the accuracy of a quartz watch to +/- 1 second a year. The improved accuracy was achieved by using an AT-cut crystal which oscillates at 8.4 MHz (8,388,608 Hz). The watch maintains its greater accuracy by continuously monitoring and adjusting for frequency and temperature shifts once every minute.
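Both 32,768 Hz and 8,388,608 Hz are powers of two, which is what allows a simple chain of divide-by-two flip-flop stages to reduce the crystal frequency to a 1 Hz tick. The sketch below (an author's illustration; the 365.25-day year is an assumption) also expresses ±1 second per year as a relative frequency tolerance:

```python
# Illustration: watch crystal frequencies are powers of two, so repeated
# divide-by-two stages yield a 1 Hz tick; also express +/- 1 s/year as a tolerance.
from math import log2

for f_hz in (32_768, 8_388_608):          # 2**15 and 2**23
    print(f"{f_hz} Hz -> {int(log2(f_hz))} divide-by-two stages to reach 1 Hz")

SECONDS_PER_YEAR = 365.25 * 24 * 3600     # assumed average year length
tolerance_ppb = 1 / SECONDS_PER_YEAR * 1e9
print(f"+/- 1 s/year corresponds to about {tolerance_ppb:.0f} parts per billion")
```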
Radio-controlled wristwatch
In 1990, Junghans offered the first radio-controlled wristwatch, the MEGA 1. In this type, the watch's quartz oscillator is set to the correct time daily by coded radio time signals broadcast by government-operated time stations such as JJY, MSF, RBU, DCF77, and WWVB, received by a radio receiver in the watch. This allows the watch to have the same long-term accuracy as the atomic clocks which control the time signals. Recent models are capable of receiving synchronization signals from various time stations worldwide.
Atomic wristwatch
In 2013 Bathys Hawaii introduced their Cesium 133 Atomic Watch, the first watch to keep time with an internal atomic clock. Unlike the radio watches described above, which achieve atomic-clock accuracy with quartz clock circuits corrected by radio time signals received from government atomic clocks, this watch contains a tiny cesium atomic clock on a chip. It is reported to keep time to an accuracy of one second in 1000 years.
The watch is based on a chip developed by the breakthrough Chip Scale Atomic Clock (CSAC) program of the US Defense Advanced Research Projects Agency (DARPA) which was initiated in 2001, and produced the first prototype atomic clock chip in 2005. Symmetricom began manufacturing the chips in 2011. Like other cesium clocks the watch keeps time with an ultraprecise 9.192631770 GHz microwave signal produced by electron transitions between two hyperfine energy levels in atoms of cesium, which is divided down by digital counters to give a 1 Hz clock signal to drive the hands. On the chip, liquid metal cesium in a tiny capsule is heated to vaporize the cesium. A laser shines a beam of infrared light modulated by a microwave oscillator through the capsule onto a photodetector. When the oscillator is at the precise frequency of the transition, the cesium atoms absorb the light, reducing the output of the photodetector. The output of the photodetector is used as feedback in a phase locked loop circuit to keep the oscillator at the correct frequency. The breakthrough that allowed a rack-sized cesium clock to be shrunk small enough to fit on a chip was a technique called coherent population trapping, which eliminated the need for a bulky microwave cavity.
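The feedback principle described above can be sketched in a few lines of Python. The absorption-dip model, the 500 Hz linewidth, the step size and the function names below are all illustrative assumptions, not the actual CSAC control firmware:

```python
# Highly simplified sketch of the lock loop: the microwave oscillator is nudged
# so the photodetector output stays at its minimum, i.e. on the cesium transition.

CESIUM_HZ = 9_192_631_770  # defining hyperfine transition frequency

def photodetector(osc_hz: float) -> float:
    """Toy absorption dip: output drops as the oscillator nears the transition."""
    detuning = osc_hz - CESIUM_HZ
    return 1.0 - 1.0 / (1.0 + (detuning / 500.0) ** 2)   # assumed 500 Hz linewidth

def lock_loop(osc_hz: float, step_hz: float = 100.0, iterations: int = 1000) -> float:
    for _ in range(iterations):
        # Probe slightly above and below, then move toward the more absorbed side.
        if photodetector(osc_hz + step_hz) < photodetector(osc_hz - step_hz):
            osc_hz += step_hz
        else:
            osc_hz -= step_hz
    return osc_hz

locked = lock_loop(CESIUM_HZ + 40_000.0)
print(f"locked within {abs(locked - CESIUM_HZ):.0f} Hz of the transition")
# The locked 9.19 GHz signal is then divided by digital counters down to 1 Hz.
```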
The watch was designed by John Patterson, head of Bathys, who read about the chip and decided to design a watch around it, financed by a Kickstarter campaign. Due to the large 1½ inch chip the watch is large and rectangular. It must be recharged every 30 hours.
Smartwatch
A smartwatch is a computer worn on the wrist, a wireless digital device that may have the capabilities of a cellphone, portable music player, or a personal digital assistant. By the early 2010s some had the general capabilities of a smartphone, having a processor with a mobile operating system capable of running a variety of mobile apps.
The first smartwatch was the Linux Watch, developed in 1998 by Steve Mann, who presented it on February 7, 2000. That same year, Seiko launched the Ruputer in Japan, a wristwatch computer with a 3.6 MHz processor. In 1999, Samsung launched the world's first watch phone, the SPH-WP10, which had a built-in speaker and microphone, a protruding antenna, a monochrome LCD screen and 90 minutes of talk time. IBM made a prototype of a wristwatch running Linux; the first version had 6 hours of battery life, later extended to 12 in a more advanced version, and the device was further improved with the addition of an accelerometer, a vibrating mechanism and a fingerprint sensor. IBM then joined with Citizen Watch Co. to create the WatchPad, which featured a 320x240 QVGA monochrome touch-sensitive display, ran Linux version 2.4, and included calendar software, Bluetooth, 8 MB of RAM and 16 MB of flash memory. It was targeted at students and businessmen at a price of about $399. Fossil released the Wrist PDA, a watch that ran Palm OS, contained 8 MB of RAM and 4 MB of flash memory, and featured an integrated stylus and a 160x160 resolution. It was criticized for its weight of 108 grams and was discontinued in 2005.
In early 2004, Microsoft released the SPOT smartwatch. The company demonstrated it working with coffee makers, weather stations and clocks built with SPOT technology. The smartwatch received information such as weather, news, stocks and sports scores transmitted over FM radio, and required a subscription costing from $39 to $59. Sony Ericsson later launched the LiveView, a wearable watch device serving as an external Bluetooth display for an Android smartphone. Pebble, an innovative smartwatch, raised the most money on Kickstarter up to that time, reaching 10.3 million dollars between April 12 and May 18, 2012. The watch had a 32-millimeter, 144x168-pixel black-and-white memory LCD manufactured by Sharp with a backlight, a vibrating motor, a magnetometer, an ambient light sensor and a three-axis accelerometer, and it could communicate with an Android or iOS device over Bluetooth 2.1 and 4.0 using Stonestreet One's Bluetopia+MFI software stack. As of July 2013, companies that were making smartwatches or were involved in smartwatch development included Acer, Apple, BlackBerry, Foxconn, Google, LG, Microsoft, Qualcomm, Samsung, Sony, VESAG and Toshiba; notable omissions from this list included HP, HTC, Lenovo and Nokia. Many smartwatches were released at CES 2014; one model featured a curved AMOLED display and a built-in 3G modem. On September 9, 2014, Apple Inc. announced its first smartwatch, the Apple Watch, which was released in early 2015. Microsoft released the Microsoft Band, a smart fitness tracker and its first watch since SPOT. Top watches at CES 2017 were the Garmin Fenix 5 and the Casio WSD F20. The Apple Watch Series 3 had built-in LTE, allowing phone calls, messaging and data without a nearby phone connection. During a September 2018 keynote, Apple introduced the Apple Watch Series 4, with a larger display and an EKG feature to detect abnormal heart function. Qualcomm released its Snapdragon 3100 chip the same month, a successor to the Wear 2100 with improved power efficiency and a separate low-power core that can run basic watch functions as well as slightly more advanced functions such as step tracking.
See also
Patek Philippe
Breitling
Fortis Uhren AG
IWC
Longines
Raketa
History of timekeeping devices
Zeno-Watch Basel
Horology
References
Further reading
Thompson, David, The History of Watches, New York: Abbeville Press, 2008.
External links
Functioning of a simple mechanical watch
Pictures and overview of the earliest watches
Peter Henlein: Pomander Watch Anno 1505
First American Colonial Watch
Watches
History of measurement |
26561301 | https://en.wikipedia.org/wiki/Legacy-free%20PC | Legacy-free PC | A legacy-free PC is a type of personal computer that lacks a floppy and/or optical disc drive, legacy ports, and an Industry Standard Architecture (ISA) bus (or sometimes, any internal expansion bus at all). According to Microsoft, "The basic goal for these requirements is that the operating system, devices, and end users cannot detect the presence of the following: ISA slots or devices; legacy floppy disk controller (FDC); and PS/2, serial, parallel, and game ports." The legacy ports are usually replaced with Universal Serial Bus (USB) ports. A USB adapter may be used if an older device must be connected to a PC lacking these ports. According to the 2001 edition of Microsoft's PC System Design Guide, a legacy-free PC must be able to boot from a USB device.
Removing older, usually more bulky ports and devices allows a legacy-free PC to be much more compact than earlier systems and many fall into the nettop or All in One form factor. Netbooks and Ultrabooks could also be considered a portable form of a legacy-free PC. Legacy-free PCs can be more difficult to upgrade than a traditional beige box PC, and are more typically expected to be replaced completely when they become obsolete. Many legacy-free PCs include modern devices that may be used to replace ones omitted, such as a memory card reader replacing the floppy drive.
As the first decade of the 21st century progressed, the legacy-free PC went mainstream, with legacy ports removed from commonly available computer systems in all form factors. However, the PS/2 keyboard connector still retains some use, as it can offer capabilities (e.g. implementation of n-key rollover) not offered by USB.
With those parts becoming increasingly rare on newer computers in the late 2010s and early 2020s, the term "legacy-free PC" itself has also become increasingly rare.
History
Late 1980s
In 1987, IBM released the PS/2 line with a new internal architecture; it introduced a new BIOS, the PS/2 port and the VGA port, but the line was heavily criticized for its relatively closed, proprietary architecture and low compatibility with PC-clone hardware.
The NEC UltraLite laptop, released in 1988 and the first computer to be known as a "notebook", omitted the integrated floppy drive and had limited internal storage; it can also be described as a legacy-free machine.
1990s
Apple's iMac G3, introduced in 1998, was the first widely known example of the class, drawing much criticism for its lack of legacy peripherals such as a floppy drive and the Apple Desktop Bus (ADB) connector; however, its success popularized USB ports.
Compaq released the iPaq desktop in 1999.
From November 1999 to July 2000, Dell sold the WebPC, an early and less successful Wintel legacy-free PC.
2000s
More legacy-free PCs were introduced around 2000 after the prevalence of USB and broadband internet made many of the older ports and devices obsolete. They largely took the form of low-end, consumer systems with the motivation of making computers less expensive, easier to use, and more stable and manageable. The Dell Studio Hybrid, Asus Eee Box and MSI Wind PC are examples of later, more-successful Intel-based legacy-free PCs.
Apple introduced the Apple Modem on October 12, 2005 and removed the internal 56K modem from new computers. The MacBook Air, introduced on January 29, 2008, also omitted the built-in SuperDrive and wired Ethernet connectivity that were available on all other Mac computers sold at the time. The SuperDrive would later be removed from all Macs by the end of 2016, while wired Ethernet would later be removed from all MacBook models. These removals were later followed by other PC manufacturers shipping lightweight laptops.
PGA packaging of CPUs, and the corresponding sockets on motherboards, was gradually replaced by LGA starting in the 2000s.
2010s
The northbridge, southbridge, and front-side bus (FSB) have been replaced by more integrated architectures starting in the early 2010s.
The relaunched MacBook in 2015 dropped features such as the MagSafe charging port and the Secure Digital (SD) memory card reader. It kept only two types of ports: a 3.5 mm audio jack and a USB 3.1 Type-C port. This configuration later found its way into the MacBook Pro in 2016, the only difference being that two or four Thunderbolt 3 ports were included instead of just one. In addition, all MacBook Pro models except the entry-level one replaced the function keys with a Touch Bar. These changes led to criticism because many users relied on the features that Apple had removed, yet this approach has since been copied to varying degrees by some other laptop vendors.
The BIOS is now legacy, replaced by UEFI. PCI has fallen out of favor, as it has been superseded by PCIe.
See also
Nettop
Netbook
PC 2001
WebPC
iPAQ (desktop computer)
Network computer
Thin client
Legacy system
References
Cloud clients
Information appliances
Personal computers
Classes of computers
Legacy hardware |
6014 | https://en.wikipedia.org/wiki/Cathode-ray%20tube | Cathode-ray tube | A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, the beams of which are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms (oscilloscope), pictures (television set, computer monitor), radar targets, or other phenomena. A CRT on a television set is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer.
In television sets and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and televisions the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.
A CRT is a glass envelope which is deep (i.e., long from front screen face to rear end), heavy, and fragile. The interior is evacuated to a high vacuum to facilitate the free flight of electrons from the gun(s) to the tube's face without scattering due to collisions with air molecules. As such, handling a CRT carries the risk of violent implosion that can hurl glass at great velocity. The face is typically made of thick lead glass or special barium-strontium glass to be shatter-resistant and to block most X-ray emissions. CRTs make up most of the weight of CRT TVs and computer monitors.
Since the early 2010s, CRTs have been superseded by flat-panel display technologies such as LCD, plasma display, and OLED displays, which are cheaper to manufacture and run, as well as significantly lighter and less bulky. Flat-panel displays can also be made in very large sizes, whereas CRTs were limited to a much smaller practical maximum screen size.
A CRT works by electrically heating a tungsten coil which in turn heats a cathode in the rear of the CRT, causing it to emit electrons which are modulated and focused by electrodes. The electrons are steered by deflection coils or plates, and an anode accelerates them towards the phosphor-coated screen, which generates light when hit by the electrons.
Before the invention of the integrated circuit, CRTs were thought of as the most complicated consumer electronics product.
History
Cathode rays were discovered by Julius Plücker and Johann Wilhelm Hittorf. Hittorf observed that some unknown rays were emitted from the cathode (negative electrode) which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the charge-mass-ratio of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which had already been named electrons by Irish physicist George Johnstone Stoney in 1891. The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897.
It was a cold-cathode diode, a modification of the Crookes tube with a phosphor-coated screen. Braun was the first to conceive the use of a CRT as a display device.
In 1908, Alan Archibald Campbell-Swinton, fellow of the Royal Society (UK), published a letter in the scientific journal Nature, in which he described how "distant electric vision" could be achieved by using a cathode-ray tube (or "Braun" tube) as both a transmitting and receiving device.
He expanded on his vision in a speech given in London in 1911 and reported in The Times and the Journal of the Röntgen Society.
The first cathode-ray tube to use a hot cathode was developed by John Bertrand Johnson (who gave his name to the term Johnson noise) and Harry Weiner Weinhart of Western Electric, and became a commercial product in 1922. The introduction of hot cathodes allowed for lower acceleration anode voltages and higher electron beam currents, since the anode now only accelerated the electrons emitted by the hot cathode, and no longer had to have a very high voltage to induce electron emission from the cold cathode.
In 1926, Kenjiro Takayanagi demonstrated a CRT television that received images with a 40-line resolution. By 1927, he improved the resolution to 100 lines, which was unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. By 1935, he had invented an early all-electronic CRT television.
In 1927, Philo Farnsworth created a television prototype.
The CRT was named in 1929 by inventor Vladimir K. Zworykin, who was influenced by Takayanagi's earlier work. RCA was granted a trademark for the term (for its cathode-ray tube) in 1932; it voluntarily released the term to the public domain in 1950.
In the 1930s, Allen B. DuMont made the first CRTs to last 1,000 hours of use, which was one of the factors that led to the widespread adoption of television.
The first commercially made electronic television sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934.
In 1947, the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen, was created.
From 1949 to the early 1960s, there was a shift from circular CRTs to rectangular CRTs, although the first rectangular CRTs were made in 1938 by Telefunken. While circular CRTs were the norm, European TV sets often blocked portions of the screen to make it appear somewhat rectangular while American sets often left the entire front of the CRT exposed or only blocked the upper and lower portions of the CRT.
In 1954, RCA produced some of the first color CRTs, the 15GP22 CRTs used in the CT-100, the first color TV set to be
mass produced. The first rectangular color CRTs were also made in 1954. However, the first rectangular color CRTs to be offered to the public were made in 1963. One of the challenges that had to be solved to produce the rectangular color CRT was convergence at the corners of the CRT. In 1965, brighter rare earth phosphors began replacing dimmer and cadmium-containing red and green phosphors. Eventually blue phosphors were replaced as well.
The size of CRTs increased over time, from 20 inches in 1938, to 21 inches in 1955, 35 inches by 1985, and 43 inches by 1989. However, experimental 31 inch CRTs were made as far back as 1938.
In 1960, the Aiken tube was invented. It was a CRT in a flat-panel display format with a single electron gun. Deflection was electrostatic and magnetic, but due to patent problems, it was never put into production. It was also envisioned as a head-up display in aircraft. By the time patent issues were solved, RCA had already invested heavily in conventional CRTs.
1968 marks the release of the Sony Trinitron brand with the model KV-1310, which was based on aperture grille technology. It was acclaimed for its improved output brightness. The Trinitron screen was identifiable by its upright cylindrical shape, due to its unique triple-cathode, single-gun construction.
In 1987, flat-screen CRTs were developed by Zenith for computer monitors, reducing reflections and helping increase image contrast and brightness. Such CRTs were expensive, which limited their use to computer monitors. Attempts were made to produce flat-screen CRTs using inexpensive and widely available float glass.
In 1990, the first CRTs with HD resolution were released to the market by Sony.
In the mid-1990s, some 160 million CRTs were made per year.
Flat-panel displays dropped in price and started significantly displacing cathode-ray tubes in the 2000s. After several predictions, LCD monitor sales began exceeding those of CRTs in 2003–2004 and LCD TV sales started exceeding those of CRTs in the US in 2005, in Japan in 2005–2006, in Europe in 2006, globally in 2007–2008, and in India in 2013.
In the mid-2000s, Canon and Sony presented the surface-conduction electron-emitter display and field-emission displays, respectively. They both were flat-panel displays that had one (SED) or several (FED) electron emitters per subpixel in place of electron guns. The electron emitters were placed on a sheet of glass and the electrons were accelerated to a nearby sheet of glass with phosphors using an anode voltage. The electrons were not focused, making each subpixel essentially a flood beam CRT. They were never put into mass production as LCD technology was significantly cheaper, eliminating the market for such displays.
The last known manufacturer of (in this case, recycled) CRTs, Videocon, ceased production in 2015. CRT TVs stopped being made around the same time.
In 2015, several CRT manufacturers were convicted in the US for price fixing. The same occurred in Canada in 2018.
Demise
Worldwide sales of CRT computer monitors peaked in 2000, at 90 million units, while those of CRT TVs peaked in 2005 at 130 million units.
Beginning in the late 90s to the early 2000s, CRTs began to be replaced with LCDs, starting first with computer monitors smaller than 15 inches in size, largely because of their lower bulk. Among the first manufacturers to stop CRT production was Hitachi in 2001, followed by Sony in Japan in 2004, Thomson in the US in 2004, Matsushita Toshiba picture display in 2005 in the US, 2006 in Malaysia and 2007 in China, Sony in the US in 2006, Sony in Singapore and Malaysia for the Latin American and Asian markets in 2008, Samsung SDI in 2007 and 2012 and Cathode Ray Technology (formerly Philips) in 2012 and Videocon in 2015–16. Ekranas in Lithuania and LG.Philips Displays went bankrupt in 2005 and 2006, respectively. Matsushita Toshiba stopped in the US in 2004 due to losses of $109 million, and in Malaysia in 2006 due to losses that almost equaled their sales. The last CRT TVs at CES were shown by Samsung in 2007 and the last mass produced model was introduced by LG in 2008 for developing markets due to its low price. The last CRT TV by a major manufacturer was introduced by LG in 2010.
CRTs were first replaced by LCDs in developed markets such as Japan and Europe in the 2000s, but continued to be popular in developing regions such as Latin America, China, other parts of Asia and the Middle East due to their low price compared to contemporary flat-panel TVs, and later in markets such as rural India. However, around 2014, even rural markets started favoring LCDs over CRTs, leading to the demise of the technology.
Despite being a mainstay of display technology for decades, CRT-based computer monitors and televisions are now virtually a dead technology. Demand for CRT screens dropped in the late 2000s. The rapid advances and falling prices of LCD flat panel technology — first for computer monitors, and then for televisions — spelled doom for competing display technologies such as CRT, rear-projection, and plasma display. Despite efforts from Samsung and LG to make CRTs competitive with their LCD and plasma counterparts, offering slimmer and cheaper models to compete with similarly sized and more expensive LCDs, CRTs eventually became obsolete and were relegated to developing markets once LCDs fell in price, with their lower bulk, weight and ability to be wall mounted coming as pluses.
Most high-end CRT production had ceased by around 2010, including high-end Sony and Panasonic product lines. In Canada and the United States, the sale and production of high-end, large-screen CRT TVs had all but ended by 2007. Just a couple of years later, inexpensive "combo" CRT TVs (smaller screens with an integrated VHS player) disappeared from discount stores.
Electronics retailers such as Best Buy steadily reduced store spaces for CRTs. In 2005, Sony announced that they would stop the production of CRT computer displays. Samsung did not introduce any CRT models for the 2008 model year at the 2008 Consumer Electronics Show; on 4 February 2008, they removed their 30" wide screen CRTs from their North American website and did not replace them with new models.
In the United Kingdom, DSG (Dixons), the largest retailer of domestic electronic equipment, reported that CRT models made up 80–90% of the volume of televisions sold at Christmas 2004, and 15–20% just a year later, and that they were expected to be less than 5% at the end of 2006. Dixons ceased selling CRT televisions in 2006.
The demise of CRTs has made maintaining arcade machines made before the wide adoption of flat-panel displays difficult, due to a lack of spare replacement CRTs. (CRTs may need replacement due to wear as explained further below.) Repairing CRTs, although possible, requires a high level of skill.
Current uses
While CRT use declined dramatically in the late 2000s, CRTs are still used by consumers and some industries, and they do have some distinct advantages over newer technologies.
Because a CRT does not need to draw a full image and instead uses interlaced lines, a CRT is faster than an LCD which draws the entire image. CRTs are also able to correctly display certain resolutions, such as the 256x224 resolution of the Nintendo Entertainment System (NES). This is also an example of the most common usage of CRTs by consumers, retro video gaming. Some reasons for this include:
CRTs are able to correctly display the often “oddball” resolutions that many older consoles use.
CRTs have the best quality when watching analog programming such as on VHS or through an RF signal.
Some industries still use CRTs because it is either too much effort, downtime, and/or cost to replace them, or there is no substitute available; a notable example is the airline industry. Planes such as the Boeing 747-400 and the Airbus A320 used CRT instruments in their glass cockpits instead of mechanical instruments. Airlines such as Lufthansa still use CRT technology, which also uses floppy disks for navigation updates.
CRTs also tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist.
Comparison with other technologies
LCD advantages over CRT: Lower bulk, power consumption and heat generation; higher refresh rates (up to 360 Hz); higher contrast ratios
CRT advantages over LCD: Better color reproduction, no motion blur, multisyncing available in many monitors, no input lag
OLED advantages over CRT: Lower bulk, similar color reproduction, higher contrast ratios, and similar refresh rates (over 60 Hz, up to 120 Hz, though not on computer monitors); however, OLED also suffers from motion blur
On CRTs, refresh rate depends on resolution, both of which are ultimately limited by the maximum horizontal scanning frequency of the CRT. Motion blur also depends on the decay time of the phosphors. Phosphors that decay too slowly for a given refresh rate may cause smearing or motion blur on the image. In practice, CRTs are limited to a refresh rate of 160 Hz. LCDs that can compete with OLED (Dual Layer, and mini-LED LCDs) are not available in high refresh rates, although quantum dot LCDs (QLEDs) are available in high refresh rates (up to 144 Hz) and are competitive in color reproduction with OLEDs.
CRT monitors can still outperform LCD and OLED monitors in input lag, as there is no signal processing between the CRT and the display connector of the monitor, since CRT monitors often use VGA which provides an analog signal that can be fed to a CRT directly. Video cards designed for use with CRTs may have a RAMDAC to generate the analog signals needed by the CRT. Also, CRT monitors are often capable of displaying sharp images at several resolutions, an ability known as multisyncing. Due to these reasons, CRTs are sometimes preferred by PC gamers in spite of their bulk, weight and heat generation.
Construction
Body
The body of a CRT is usually made up of three parts: A screen/faceplate/panel, a cone/funnel, and a neck. The joined screen, funnel and neck are known as the bulb or envelope.
The neck is made from a glass tube while the funnel and screen are made by pouring and then pressing glass into a mold. The glass, known as CRT glass or TV glass, needs special properties to shield against x-rays while providing adequate light transmission in the screen or being very electrically insulating in the funnel and neck. The formulation that gives the glass its properties is also known as the melt. The glass is of very high quality, being almost contaminant and defect free. Most of the costs associated with glass production come from the energy used to melt the raw materials into glass. Glass furnaces for CRT glass production have several taps to allow molds to be replaced without stopping the furnace, to allow production of CRTs of several sizes. Only the glass used on the screen needs to have precise optical properties. The optical properties of the glass used on the screen affects color reproduction and purity in Color CRTs. Transmittance, or how transparent the glass is, may be adjusted to be more transparent to certain colors (wavelengths) of light. Transmittance is measured at the center of the screen with a 546nm wavelength light, and a 10.16mm thick screen. Transmittance goes down with increasing thickness. Standard transmittances for Color CRT screens are 86%, 73%, 57%, 46%, 42% and 30%. Lower transmittances are used to improve image contrast but they put more stress on the electron gun, requiring more power on the electron gun for a higher electron beam power to light the phosphors more brightly to compensate for the reduced transmittance. The transmittance must be uniform across the screen to ensure color purity. The radius (curvature) of screens has increased (grown less curved) over time, from 30 to 68 inches, ultimately evolving into completely flat screens, reducing reflections. The thickness of both curved and flat screens gradually increases from the center outwards, and with it, transmittance is gradually reduced. This means that flat-screen CRTs may not be completely flat on the inside. The glass used in CRTs arrives from the glass factory to the CRT factory as either separate screens and funnels with fused necks, for Color CRTs, or as bulbs made up of a fused screen, funnel and neck. There were several glass formulations for different types of CRTs, that were classified using codes specific to each glass manufacturer. The compositions of the melts were also specific to each manufacturer. Those optimized for high color purity and contrast were doped with Neodymium, while those for monochrome CRTs were tinted to differing levels, depending on the formulation used and had transmittances of 42% or 30%. Purity is ensuring that the correct colors are activated (for example, ensuring that red is displayed uniformly across the screen) while convergence ensures that images are not distorted. Convergence may be modified using a cross hatch pattern.
CRT glass used to be made by dedicated companies such as AGC Inc., O-I Glass, Samsung Corning Precision Materials, Corning Inc., and Nippon Electric Glass; others such as Videocon, Sony for the US market and Thomson made their own glass.
The funnel and the neck are made of leaded potash-soda glass or a lead silicate glass formulation to shield against x-rays generated by high voltage electrons as they decelerate after striking a target, such as the phosphor screen or shadow mask of a color CRT. The velocity of the electrons depends on the anode voltage of the CRT; the higher the voltage, the higher the speed. The amount of x-rays emitted by a CRT can also be lowered by reducing the brightness of the image. Leaded glass is used because it is inexpensive, while also shielding heavily against x-rays, although some funnels may also contain barium. The screen is usually instead made of a special lead-free silicate glass formulation with barium and strontium to shield against x-rays. Another glass formulation uses 2–3% of lead on the screen. Monochrome CRTs may have a tinted barium-lead glass formulation in both the screen and funnel, with a potash-soda lead glass in the neck; the potash-soda and barium-lead formulations have different thermal expansion coefficients. The glass used in the neck must be an excellent electrical insulator to contain the voltages used in the electron optics of the electron gun, such as focusing lenses. The lead in the glass causes it to brown (darken) with use due to x-rays, although usually the CRT cathode wears out due to cathode poisoning before browning becomes apparent. The glass formulation determines the highest possible anode voltage and hence the maximum possible CRT screen size. For color, maximum voltages are often 24 to 32 kV, while for monochrome it is usually 21 or 24.5 kV, limiting the size of monochrome CRTs to 21 inches, or approx. 1 kV per inch. The voltage needed depends on the size and type of CRT. Since the formulations of the different parts are different, they must be compatible with one another, having similar thermal expansion coefficients. The screen may also have an anti-glare or anti-reflective coating, or be ground to prevent reflections. CRTs may also have an anti-static coating.
The leaded glass in the funnels of CRTs may contain 21 to 25% of lead oxide (PbO), The neck may contain 30 to 40% of lead oxide, and the screen may contain 12% of barium oxide, and 12% of strontium oxide. A typical CRT contains several kilograms of lead as lead oxide in the glass depending on its size; 12 inch CRTs contain 0.5 kg of lead in total while 32 inch CRTs contain up to 3 kg. Strontium oxide began being used in CRTs, its major application, in the 1970s.
Some early CRTs used a metal funnel insulated with polyethylene instead of glass with conductive material. Others had ceramic or blown pyrex instead of pressed glass funnels. Early CRTs did not have a dedicated anode cap connection; the funnel was the anode connection, so it was live during operation.
The funnel is coated on the inside and outside with a conductive coating, making the funnel a capacitor, helping stabilize and filter the anode voltage of the CRT, and significantly reducing the amount of time needed to turn on a CRT. The stability provided by the coating solved problems inherent to early power supply designs, as they used vacuum tubes. Because the funnel is used as a capacitor, the glass used in the funnel must be an excellent electrical insulator (dielectric). The inner coating has a positive voltage (the anode voltage that can be several kV) while the outer coating is connected to ground. CRTs powered by more modern power supplies do not need to be connected to ground, due to the more robust design of modern power supplies. The value of the capacitor formed by the funnel is .005-.01uF, although at the voltage the anode is normally supplied with. The capacitor formed by the funnel can also suffer from dielectric absorption, similarly to other types of capacitors. Because of this CRTs have to be discharged before handling to prevent injury.
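As a rough illustration of why discharging matters, the following Python snippet estimates the charge and energy held by the funnel capacitor; the 25 kV anode voltage is an assumed typical value, and the capacitance is taken from the range quoted above:

```python
# Rough numbers for the charge stored by the funnel capacitor at anode voltage.

C = 0.01e-6     # farads, upper end of the .005-.01 uF range quoted above
V = 25_000.0    # volts, assumed typical color-CRT anode voltage

charge_coulombs = C * V              # Q = C * V
energy_joules   = 0.5 * C * V ** 2   # E = 1/2 * C * V^2

print(f"stored charge ~ {charge_coulombs * 1e3:.2f} mC")   # ~0.25 mC
print(f"stored energy ~ {energy_joules:.1f} J")            # ~3.1 J
```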
The depth of a CRT is related to its screen size. Usual deflection angles were 90° for computer monitor CRTs and small CRTs and 110° which was the standard in larger TV CRTs, with 120 or 125° being used in slim CRTs made since 2001–2005 in an attempt to compete with LCD TVs.
Over time, deflection angles increased as they became practical, from 50° in 1938 to 110° in 1959, and 125° in the 2000s. 140° deflection CRTs were researched but never commercialized, as convergence problems were never resolved.
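A simple geometric estimate shows why wider deflection angles allow shallower tubes. In the Python sketch below, the 69 cm (27-inch) diagonal and the assumption that the beam is deflected from a single point are illustrative simplifications, and the neck and electron gun add further depth:

```python
# Geometric sketch: depth behind the screen ~ (diagonal / 2) / tan(angle / 2),
# not counting the neck and electron gun.
from math import radians, tan

def funnel_depth_cm(diagonal_cm: float, deflection_deg: float) -> float:
    return (diagonal_cm / 2) / tan(radians(deflection_deg / 2))

for angle in (90, 110, 125):
    print(f"69 cm (27-inch) screen, {angle} deg -> ~{funnel_depth_cm(69, angle):.0f} cm deep")
# 90 deg -> ~35 cm, 110 deg -> ~24 cm, 125 deg -> ~18 cm
```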
Size and weight
The size of the screen of a CRT is measured in two ways: the size of the screen or the face diagonal, and the viewable image size/area or viewable screen diagonal, which is the part of the screen with phosphor. The size of the screen is the viewable image size plus its black edges which are not coated with phosphor. The viewable image may be perfectly square or rectangular while the edges of the CRT are black and have a curvature (such as in black stripe CRTs) or the edges may be black and truly flat (such as in Flatron CRTs), or the edges of the image may follow the curvature of the edges of the CRT, which may be the case in CRTs without and with black edges and curved edges. Black stripe CRTs were first made by Toshiba in 1972.
Small CRTs below 3 inches were made for handheld televisions such as the MTV-1 and viewfinders in camcorders. In these, there may be no black edges, that are however truly flat.
Most of the weight of a CRT comes from the thick glass screen, which comprises 65% of the total weight of a CRT. The funnel and neck glass comprise the remaining 30% and 5% respectively. The glass in the funnel is thinner than on the screen. Chemically or thermally tempered glass may be used to reduce the weight of the CRT glass.
Anode
The outer conductive coating is connected to ground while the inner conductive coating is connected using the anode button/cap through a series of capacitors and diodes (a Cockcroft–Walton generator) to the high voltage flyback transformer; the inner coating is the anode of the CRT, which, together with an electrode in the electron gun, is also known as the final anode. The inner coating is connected to the electrode using springs. The electrode forms part of a bipotential lens. The capacitors and diodes serve as a voltage multiplier for the current delivered by the flyback.
For the inner funnel coating, monochrome CRTs use aluminum while color CRTs use aquadag; Some CRTs may use iron oxide on the inside. On the outside, most CRTs (but not all) use aquadag. Aquadag is an electrically conductive graphite-based paint. In color CRTs, the aquadag is sprayed onto the interior of the funnel whereas historically aquadag was painted into the interior of monochrome CRTs.
The anode is used to accelerate the electrons towards the screen and also collects the secondary electrons that are emitted by the phosphor particles in the vacuum of the CRT.
The anode cap connection in modern CRTs must be able to handle up to 55–60 kV depending on the size and brightness of the CRT. Higher voltages allow for larger CRTs, higher image brightness, or a tradeoff between the two. It consists of a metal clip that expands on the inside of an anode button that is embedded on the funnel glass of the CRT. The connection is insulated by a silicone suction cup, possibly also using silicone grease to prevent corona discharge.
The anode button must be specially shaped to establish a hermetic seal between the button and funnel. X-rays may leak through the anode button, although that may not be the case in newer CRTs starting from the late 1970s to early 1980s, thanks to a new button and clip design. The button may consist of a set of 3 nested cups, with the outermost cup made of a nickel–chromium–iron alloy containing 40 to 49% nickel and 3 to 6% chromium to make the button easy to fuse to the funnel glass, a first inner cup made of thick, inexpensive iron to shield against x-rays, and a second, innermost cup also made of iron or any other electrically conductive metal to connect to the clip. The cups must be sufficiently heat resistant and have thermal expansion coefficients similar to that of the funnel glass to withstand being fused to it. The inner side of the button is connected to the inner conductive coating of the CRT. The anode button may be attached to the funnel while it is being pressed into shape in a mold. Alternatively, the x-ray shielding may instead be built into the clip.
The flyback transformer is also known as an IHVT (Integrated High Voltage Transformer) if it includes a voltage multiplier. The flyback uses a ceramic or powdered iron core to enable efficient operation at high frequencies. The flyback contains one primary and many secondary windings that provide several different voltages. The main secondary winding supplies the voltage multiplier with voltage pulses to ultimately supply the CRT with the high anode voltage it uses, while the remaining windings supply the CRT's filament voltage, keying pulses, focus voltage and voltages derived from the scan raster. When the transformer is turned off, the flyback's magnetic field quickly collapses which induces high voltage in its windings. The speed at which the magnetic field collapses determines the voltage that is induced, so the voltage increases alongside its speed. A capacitor (Retrace Timing Capacitor) or series of capacitors (to provide redundancy) is used to slow the collapse of the magnetic field.
The design of the high voltage power supply in a product using a CRT has an influence on the amount of x-rays emitted by the CRT. The amount of emitted x-rays increases with both higher voltages and currents. If the product, such as a TV set, uses an unregulated high voltage power supply, meaning that the anode and focus voltages go down with increasing electron current when displaying a bright image, the amount of emitted x-rays is at its highest when the CRT is displaying a moderately bright image, since when displaying dark or bright images the higher anode voltage counteracts the lower electron beam current and vice versa, respectively. The high voltage regulator and rectifier vacuum tubes in some old CRT TV sets may also emit x-rays.
Electron gun
The electron gun emits the electrons that ultimately hit the phosphors on the screen of the CRT. The electron gun contains a heater, which heats a cathode, which generates electrons that, using grids, are focused and ultimately accelerated into the screen of the CRT. The acceleration occurs in conjunction with the inner aluminum or aquadag coating of the CRT. The electron gun is positioned so that it aims at the center of the screen. It is inside the neck of the CRT, and it is held together and mounted to the neck using glass beads or glass support rods, which are the glass strips on the electron gun. The electron gun is made separately and then placed inside the neck through a process called "winding", or sealing. The electron gun has a glass wafer that is fused to the neck of the CRT. The connections to the electron gun penetrate the glass wafer. Once the electron gun is inside the neck, its metal parts (grids) are arced between each other using high voltage to smooth any rough edges in a process called spot knocking, to prevent the rough edges in the grids from generating secondary electrons.
Construction and method of operation
It has a hot cathode that is heated by a tungsten filament heating element; the heater may draw 0.5 to 2 A of current depending on the CRT. The voltage applied to the heater can affect the life of the CRT. Heating the cathode energizes the electrons in it, aiding electron emission, while at the same time current is supplied to the cathode; typically anywhere from 140 mA at 1.5 V to 600 mA at 6.3 V. The cathode creates an electron cloud (emits electrons) whose electrons are extracted, accelerated and focused into an electron beam. Color CRTs have three cathodes: one for red, green and blue. The heater sits inside the cathode but doesn't touch it; the cathode has its own separate electrical connection. The cathode is coated onto a piece of nickel which provides the electrical connection and structural support; the heater sits inside this piece without touching it.
There are several short circuits that can occur in a CRT electron gun. One is a heater-to-cathode short, which causes the cathode to permanently emit electrons and may produce an image with a bright red, green or blue tint with retrace lines, depending on the cathode(s) affected. Alternatively, the cathode may short to the control grid, possibly causing similar effects, or the control grid and screen grid (G2) can short, causing a very dark image or no image at all. The cathode may be surrounded by a shield to prevent sputtering.
The cathode is made of barium oxide that must be activated by heating to enable it to release electrons. Activation is necessary because barium oxide is not stable in air, so it is applied to the cathode as barium carbonate, which cannot emit electrons. Activation heats the barium carbonate to decompose it into barium oxide and carbon dioxide while forming a thin layer of metallic barium on the cathode. Activation occurs during evacuation of (at the same time a vacuum is formed in) the CRT. After activation the oxide can become damaged by several common gases such as water vapor, carbon dioxide, and oxygen. Alternatively, barium strontium calcium carbonate may be used instead of barium carbonate, yielding barium, strontium and calcium oxides after activation. During operation, the barium oxide is heated to 800-1000°C, at which point it starts shedding electrons.
Since it is a hot cathode, it is prone to cathode poisoning, which is the formation of a positive ion layer that prevents the cathode from emitting electrons, reducing image brightness significantly or completely and causing focus and intensity to be affected by the frequency of the video signal preventing detailed images from being displayed by the CRT. The positive ions come from leftover air molecules inside the CRT or from the cathode itself that react over time with the surface of the hot cathode. Reducing metals such as manganese, zirconium, magnesium, aluminum or titanium may be added to the piece of nickel to lengthen the life of the cathode, as during activation, the reducing metals diffuse into the barium oxide, improving its lifespan, especially at high electron beam currents. In color CRTs with red, green and blue cathodes, one or more cathodes may be affected independently of the others, causing total or partial loss of one or more colors. CRTs can wear or burn out due to cathode poisoning. Cathode poisoning is accelerated by increased cathode current (overdriving). In color CRTs, since there are three cathodes, one for red, green and blue, a single or more poisoned cathode may cause the partial or complete loss of one or more colors, tinting the image. The layer may also act as a capacitor in series with the cathode, inducing thermal lag. The cathode may instead be made of scandium oxide or incorporate it as a dopant, to delay cathode poisoning, extending the life of the cathode by up to 15%.
The amount of electrons generated by the cathodes is related to their surface area. A cathode with more surface area creates more electrons, in a larger electron cloud, which makes focusing the electron cloud into an electron beam more difficult. Normally, only a part of the cathode emits electrons unless the CRT displays images with parts that are at full image brightness; only the parts at full brightness cause all of the cathode to emit electrons. The area of the cathode that emits electrons grows from the center outwards as brightness increases, so cathode wear may be uneven. When only the center of the cathode is worn, the CRT may light brightly those parts of images that have full image brightness but not show darker parts of images at all, in such a case the CRT displays a poor gamma characteristic.
The second (screen) grid of the gun (G2) accelerates the electrons towards the screen using several hundred DC volts. A negative current is applied to the first (control) grid (G1) to converge the electron beam. G1 in practice is a Wehnelt cylinder. The brightness of the screen is not controlled by varying the anode voltage nor the electron beam current (they are never varied) despite them having an influence on image brightness, rather image brightness is controlled by varying the difference in voltage between the cathode and the G1 control grid. A third grid (G3) electrostatically focuses the electron beam before it is deflected and accelerated by the anode voltage onto the screen. Electrostatic focusing of the electron beam may be accomplished using an Einzel lens energized at up to 600 volts. Before electrostatic focusing, focusing the electron beam required a large, heavy and complex mechanical focusing system placed outside the electron gun.
However, electrostatic focusing cannot be accomplished near the final anode of the CRT due to its high voltage in the dozens of Kilovolts, so a high voltage (~600 to 8000 volt) electrode, together with an electrode at the final anode voltage of the CRT, may be used for focusing instead. Such an arrangement is called a bipotential lens, which also offers higher performance than an Einzel lens, or, focusing may be accomplished using a magnetic focusing coil together with a high anode voltage of dozens of kilovolts. However, magnetic focusing is expensive to implement, so it is rarely used in practice. Some CRTs may use two grids and lenses to focus the electron beam. The focus voltage is generated in the flyback using a subset of the flyback's high voltage winding in conjunction with a resistive voltage divider. The focus electrode is connected alongside the other connections that are in the neck of the CRT.
There is a voltage called the cutoff voltage, which is the voltage that creates black on the screen, since it causes the image created by the electron beam to disappear; this voltage is applied to G1. In a color CRT with three guns, the guns have different cutoff voltages. Many CRTs share grids G1 and G2 across all three guns, increasing image brightness and simplifying adjustment, since on such CRTs there is a single cutoff voltage for all three guns (as G1 is shared across all guns), but placing additional stress on the video amplifier used to feed video into the electron gun's cathodes, since the cutoff voltage becomes higher. Monochrome CRTs do not suffer from this problem. In monochrome CRTs, video is fed to the gun by varying the voltage on the first control grid.
During retracing of the electron beam, the preamplifier that feeds the video amplifier is disabled and the video amplifier is biased to a voltage higher than the cutoff voltage to prevent retrace lines from showing, or G1 can have a large negative voltage applied to it to prevent electrons from getting out of the cathode. This is known as blanking. (see Vertical blanking interval and Horizontal blanking interval.) Incorrect biasing can lead to visible retrace lines on one or more colors, creating retrace lines that are tinted or white (for example, tinted red if the red color is affected, tinted magenta if the red and blue colors are affected, and white if all colors are affected). Alternatively, the amplifier may be driven by a video processor that also introduces an OSD (On Screen Display) into the video stream that is fed into the amplifier, using a fast blanking signal. TV sets and computer monitors that incorporate CRTs need a DC restoration circuit to provide a video signal to the CRT with a DC component, restoring the original brightness of different parts of the image.
The electron beam may be affected by the earth's magnetic field, causing it to normally enter the focusing lens off-center; this can be corrected using astigmation controls. Astigmation controls are both magnetic and electronic (dynamic); magnetic does most of the work while electronic is used for fine adjustments. One of the ends of the electron gun has a glass disk, the edges of which are fused with the edge of the neck of the CRT, possibly using frit; the metal leads that connect the electron gun to the outside pass through the disk.
Some electron guns have a quadrupole lens with dynamic focus to alter the shape and adjust the focus of the electron beam, varying the focus voltage depending on the position of the electron beam to maintain image sharpness across the entire screen, specially at the corners. They may also have a bleeder resistor to derive voltages for the grids from the final anode voltage.
After the CRTs were manufactured, they were aged to allow cathode emission to stabilize.
The electron guns in color CRTs are driven by a video amplifier which takes a signal per color channel and amplifies it to 40–170 V per channel, to be fed into the electron gun's cathodes; each electron gun has its own channel (one per color) and all channels may be driven by the same amplifier, which internally has three separate channels. The amplifier's capabilities limit the resolution, refresh rate and contrast ratio of the CRT, as the amplifier needs to provide high bandwidth and large voltage variations at the same time; higher resolutions and refresh rates need higher bandwidths (the speed at which the voltage can be varied, and thus how quickly the beam can switch between black and white), and higher contrast ratios need larger voltage variations or amplitude for lower black and higher white levels. 30 MHz of bandwidth can usually provide 720p or 1080i resolution, while 20 MHz usually provides around 600 (horizontal, from top to bottom) lines of resolution, for example. The difference in voltage between the cathode and the control grid is what modulates the electron beam, modulating its current and thus the brightness of the image. The phosphors used in color CRTs produce different amounts of light for a given amount of energy, so to produce white on a color CRT, all three guns must output differing amounts of energy. The gun that outputs the most energy is the red gun, since the red phosphor emits the least amount of light.
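As a rough plausibility check of those bandwidth figures, the following Python sketch (an estimate; the resolutions, refresh rate and the "one cycle per two pixels" worst case are assumptions, and blanking intervals are ignored, which is why the quoted figures are somewhat higher) relates pixel rate to required analog bandwidth:

```python
# Worst-case video is alternating black/white pixels, i.e. one full cycle per
# two pixels, so required analog bandwidth ~ pixel rate / 2 (blanking ignored).

def video_bandwidth_mhz(h_pixels: int, v_lines: int, refresh_hz: float) -> float:
    pixel_rate = h_pixels * v_lines * refresh_hz      # pixels per second
    return pixel_rate / 2 / 1e6                       # MHz

print(f"1280x720 @ 60 Hz -> ~{video_bandwidth_mhz(1280, 720, 60):.0f} MHz")   # ~28 MHz
print(f" 800x600 @ 60 Hz -> ~{video_bandwidth_mhz(800, 600, 60):.0f} MHz")    # ~14 MHz
```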
Gamma
CRTs have a pronounced triode characteristic, which results in significant gamma (a nonlinear relationship in an electron gun between applied video voltage and beam intensity).
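A minimal illustration of this characteristic follows, with an assumed exponent of 2.5; typical published values are roughly 2.2–2.5, but the exact figure varies by tube and drive conditions:

```python
# Illustration of the CRT gamma characteristic: light output rises roughly as
# the applied video drive raised to a power ("gamma").

GAMMA = 2.5  # assumed typical exponent

def relative_light_output(video_level: float, gamma: float = GAMMA) -> float:
    """video_level is normalized 0..1; returns normalized luminance 0..1."""
    return video_level ** gamma

for level in (0.25, 0.5, 0.75, 1.0):
    print(f"drive {level:.2f} -> luminance {relative_light_output(level):.3f}")
# A 50% drive signal yields only ~18% luminance, which is why video signals are
# gamma-corrected (pre-distorted with the inverse exponent) before transmission.
```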
Deflection
There are two types of deflection: magnetic and electrostatic. Magnetic is usually used in TVs and monitors as it allows for higher deflection angles (and hence shallower CRTs) and deflection power (which allows for higher electron beam current and hence brighter images) while avoiding the need for high voltages for deflection of up to 2000 volts, while oscilloscopes often use electrostatic deflection since the raw waveforms captured by the oscilloscope can be applied directly (after amplification) to the vertical electrostatic deflection plates inside the CRT.
Magnetic deflection
Those that use magnetic deflection may use a yoke that has two pairs of deflection coils; one pair for vertical, and another for horizontal deflection. The yoke can be bonded (be integral) or removable. Those that were bonded used glue or a plastic to bond the yoke to the area between the neck and the funnel of the CRT while those with removable yokes are clamped. The yoke generates heat whose removal is essential since the conductivity of glass goes up with increasing temperature, the glass needs to be insulating for the CRT to remain usable as a capacitor. The temperature of the glass below the yoke is thus checked during the design of a new yoke. The yoke contains the deflection and convergence coils with a ferrite core to reduce loss of magnetic force as well as the magnetized rings used to align or adjust the electron beams in color CRTs (The color purity and convergence rings, for example) and monochrome CRTs. The yoke may be connected using a connector, the order in which the deflection coils of the yoke are connected determines the orientation of the image displayed by the CRT. The deflection coils may be held in place using polyurethane glue.
The deflection coils are driven by sawtooth signals that may be delivered through VGA as horizontal and vertical sync signals. A CRT needs two deflection circuits: a horizontal and a vertical circuit, which are similar except that the horizontal circuit runs at a much higher frequency (a Horizontal scan rate) of 15 to 240 kHz depending on the refresh rate of the CRT and the number of horizontal lines to be drawn (the vertical resolution of the CRT). The higher frequency makes it more susceptible to interference, so an automatic frequency control (AFC) circuit may be used to lock the phase of the horizontal deflection signal to that of a sync signal, to prevent the image from becoming distorted diagonally. The vertical frequency varies according to the refresh rate of the CRT. So a CRT with a 60 Hz refresh rate has a vertical deflection circuit running at 60 Hz. The horizontal and vertical deflection signals may be generated using two circuits that work differently; the horizontal deflection signal may be generated using a voltage controlled oscillator (VCO) while the vertical signal may be generated using a triggered relaxation oscillator. In many TVs, the frequencies at which the deflection coils run is in part determined by the inductance value of the coils. CRTs had differing deflection angles; the higher the deflection angle, the shallower the CRT for a given screen size, but at the cost of more deflection power and lower optical performance.
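The quoted horizontal scan rates follow from the fact that the horizontal circuit must sweep once per scan line; the Python sketch below uses assumed, illustrative line counts (including blanking lines) and refresh rates:

```python
# Horizontal scan frequency ~ (lines per frame, including blanking) x (refresh rate).

def horizontal_scan_khz(total_lines: int, refresh_hz: float) -> float:
    return total_lines * refresh_hz / 1000.0

print(f"NTSC TV  : {horizontal_scan_khz(525, 29.97):.2f} kHz")   # ~15.73 kHz
print(f"VGA 480p : {horizontal_scan_khz(525, 60):.1f} kHz")      # ~31.5 kHz
print(f"High-res : {horizontal_scan_khz(1250, 85):.0f} kHz")     # ~106 kHz (assumed line count)
```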
Higher deflection power means more current is sent to the deflection coils to bend the electron beam at a higher angle, which in turn may generate more heat or require electronics that can handle the increased power. Heat is generated due to resistive and core losses. The deflection power is measured in mA per inch. The vertical deflection coils may require approximately 24 volts while the horizontal deflection coils require approx. 120 volts to operate.
The deflection coils are driven by deflection amplifiers. The horizontal deflection coils may also be driven in part by the horizontal output stage of a TV set. The stage contains a capacitor that is in series with the horizontal deflection coils that performs several functions, among them are: shaping the sawtooth deflection signal to match the curvature of the CRT and centering the image by preventing a DC bias from developing on the coil. At the beginning of retrace, the magnetic field of the coil collapses, causing the electron beam to return to the center of the screen, while at the same time the coil returns energy into capacitors, the energy of which is then used to force the electron beam to go to the left of the screen.
Due to the high frequency at which the horizontal deflection coils operate, the energy in the deflection coils must be recycled to reduce heat dissipation. Recycling is done by transferring the energy in the deflection coils' magnetic field to a set of capacitors. The voltage on the horizontal deflection coils is negative when the electron beam is on the left side of the screen and positive when the electron beam is on the right side of the screen. The energy required for deflection depends on the energy of the electrons: higher-energy (higher voltage and/or current) electron beams need more energy to be deflected, and are used to achieve higher image brightness.
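A hedged sketch of the energy bookkeeping described above: the energy stored in the deflection coil's magnetic field (½LI²) ends up in the retrace capacitor (½CV²). The inductance, current and capacitance values below are arbitrary illustrative numbers, not taken from any real chassis.

```python
import math

# Energy stored in the horizontal deflection coil at peak scan current is
# transferred to the retrace (flyback) capacitor: 0.5*L*I**2 = 0.5*C*V**2.
# L, I and C below are illustrative values only.

L = 1.0e-3   # coil inductance, henries (assumed)
I = 4.0      # peak deflection current, amperes (assumed)
C = 10.0e-9  # retrace capacitance, farads (assumed)

energy = 0.5 * L * I**2
v_peak = math.sqrt(2 * energy / C)
print(f"stored energy {energy * 1e3:.1f} mJ, peak retrace voltage about {v_peak:.0f} V")
```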
Electrostatic deflection
Electrostatic deflection is mostly used in oscilloscopes. Deflection is carried out by applying a voltage across two pairs of plates, one pair for horizontal and the other for vertical deflection. The electron beam is steered by varying the voltage difference across the plates in a pair. For example, applying 200 volts to the upper plate of the vertical deflection pair while keeping the bottom plate at 0 volts deflects the electron beam towards the upper part of the screen; increasing the voltage on the upper plate while keeping the bottom plate at 0 volts deflects the beam to a higher point on the screen (a larger deflection angle). The same applies to the horizontal deflection plates. Lengthening the plates and reducing the separation between the plates in a pair also increases the deflection angle.
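A minimal sketch of the usual textbook small-angle estimate for electrostatic deflection, where the displacement at the screen is roughly (plate length × plate-to-screen distance × deflection voltage) / (2 × plate separation × accelerating voltage); all numerical values below are assumptions for illustration only.

```python
# Textbook small-angle estimate of electrostatic deflection at the screen:
#   y ~ (l * D * Vd) / (2 * d * Va)
# l = plate length, D = plate-to-screen distance, d = plate separation,
# Vd = deflection voltage, Va = accelerating (anode) voltage.
# All values below are illustrative assumptions.

def deflection_m(l=0.02, D=0.25, d=0.005, Vd=200.0, Va=2000.0):
    return (l * D * Vd) / (2 * d * Va)

print(f"about {deflection_m() * 100:.1f} cm of deflection at the screen")
```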
Burn-in
Burn-in is when images are physically "burned" into the screen of the CRT; it occurs due to degradation of the phosphors from prolonged electron bombardment, and happens when a fixed image or logo is left on the screen for too long, causing it to appear as a "ghost" image or, in severe cases, to remain visible even when the CRT is off. To counter this, screensavers were used in computers to minimize burn-in. Burn-in is not exclusive to CRTs; it also occurs on plasma and OLED displays.
Evacuation
CRTs are evacuated or exhausted (a vacuum is formed) inside an oven at approx. 375–475 °C, in a process called baking or bake-out. The evacuation process also outgasses any materials inside the CRT, while decomposing others such as the polyvinyl alcohol used to apply the phosphors. The heating and cooling are done gradually to avoid inducing stress, stiffening and possibly cracking the glass; the oven heats the gases inside the CRT, increasing the speed of the gas molecules, which increases the chances of them being drawn out by the vacuum pump. The temperature of the CRT is kept below that of the oven, and the oven starts to cool just after the CRT reaches 400 °C; alternatively, the CRT may be kept at a temperature higher than 400 °C for up to 15–55 minutes. The CRT is heated during or after evacuation, and the heat may be used simultaneously to melt the frit in the CRT, joining the screen and funnel. The pump used is a turbomolecular pump or a diffusion pump; formerly, mercury vacuum pumps were also used. After baking, the CRT is disconnected ("sealed off" or "tipped off") from the vacuum pump. The getter is then fired using an RF (induction) coil. The getter is usually in the funnel or in the neck of the CRT. The getter material, which is often barium-based, captures any remaining gas particles as it evaporates due to heating induced by the RF coil (which may be combined with exothermic heating within the material); the vapor fills the CRT, trapping any gas molecules it encounters, and condenses on the inside of the CRT, forming a layer that contains the trapped gas molecules. Hydrogen may be present in the material to help distribute the barium vapor. The material is heated to temperatures above 1000 °C, causing it to evaporate. Partial loss of vacuum in a CRT can result in a hazy image, blue glowing in the neck of the CRT, flashovers, loss of cathode emission or focusing problems. The vacuum inside the CRT means that atmospheric pressure exerts a large total force on the envelope, amounting to several thousand pounds-force on a 27-inch CRT.
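To give a feel for that force, the back-of-the-envelope sketch below treats only the flat faceplate of an assumed 4:3, 27-inch (diagonal) tube; ignoring the funnel and screen curvature makes this an order-of-magnitude illustration only.

```python
# Approximate atmospheric load on the flat faceplate of a 27-inch (diagonal), 4:3 CRT.
# Ignores the funnel and faceplate curvature; purely an order-of-magnitude sketch.

diag_in = 27.0
width_in = diag_in * 4 / 5     # for a 4:3 aspect ratio, width = 4/5 of the diagonal
height_in = diag_in * 3 / 5
area_in2 = width_in * height_in

force_lbf = area_in2 * 14.7    # one standard atmosphere is about 14.7 psi
print(f"~{force_lbf:,.0f} lbf ({force_lbf * 4.448:,.0f} N) pressing on the faceplate")
```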
Rebuilding
CRTs used to be rebuilt, i.e. repaired or refurbished. The rebuilding process included the disassembly of the CRT, the disassembly and repair or replacement of the electron gun(s), the removal and redeposition of phosphors and aquadag, and so on. Rebuilding was popular until the 1960s because CRTs were expensive and wore out quickly, making repair worthwhile. The last CRT rebuilder in the US closed in 2010, and the last in Europe, RACS, located in France, closed in 2013.
Reactivation
Also known as rejuvenation, the goal is to temporarily restore the brightness of a worn CRT. This is often done by carefully increasing the voltage on the cathode heater and the current and voltage on the control grids of the electron gun either manually or using a special device called a CRT rejuvenator. Some rejuvenators can also fix heater-to-cathode shorts by running a capacitive discharge through the short.
Phosphors
Because they sit in the vacuum of the CRT, the phosphors emit secondary electrons when struck by the electron beam. The secondary electrons are collected by the anode of the CRT. They need to be collected to prevent charges from developing on the screen, which would reduce image brightness since the charge would repel the electron beam.
The phosphors used in CRTs often contain rare-earth metals, replacing earlier, dimmer phosphors. Early red and green phosphors contained cadmium, and some black-and-white CRT phosphors also contained beryllium powder, although white phosphors containing cadmium, zinc and magnesium with silver, copper or manganese as dopants were also used. The rare-earth phosphors used in CRTs are more efficient (produce more light) than earlier phosphors. The phosphors adhere to the screen because of Van der Waals and electrostatic forces; phosphors composed of smaller particles adhere more strongly. The phosphors, together with the carbon used to prevent light bleeding (in color CRTs), can be easily removed by scratching.
Several dozen types of phosphors were available for CRTs. Phosphors were classified according to color, persistence, luminance rise and fall curves, color depending on anode voltage (for phosphors used in penetration CRTs), intended use, chemical composition, safety, sensitivity to burn-in, and secondary emission properties. Examples of rare-earth phosphors are yttrium oxide for red and yttrium silicide for blue, while an example of an earlier phosphor is copper-cadmium sulfide for red.
SMPTE-C phosphors have properties defined by the SMPTE-C standard, which defines a color space of the same name. The standard prioritizes accurate color reproduction, which was made difficult by the different phosphors and color spaces used in the NTSC and PAL color systems. PAL TV sets have subjectively better color reproduction due to the use of saturated green phosphors, which have relatively long decay times that are tolerated in PAL because its lower frame rate leaves more time for the phosphors to decay. SMPTE-C phosphors were used in professional video monitors.
The phosphor coating on monochrome and color CRTs may have an aluminum coating on its rear side, used to reflect light forward, provide protection against ions (preventing ion burn of the phosphor by negative ions), manage the heat generated by electrons colliding with the phosphor, prevent static build-up that could repel electrons from the screen, and form part of the anode, collecting the secondary electrons generated by the phosphors after they are hit by the electron beam and providing those electrons with a return path. The electron beam passes through the aluminum coating before hitting the phosphors on the screen; the aluminum attenuates the electron beam voltage by about 1 kV. A film or lacquer may be applied to the phosphors to reduce the surface roughness of the phosphor layer, allowing the aluminum coating to have a uniform surface and preventing it from touching the glass of the screen. This is known as filming. The lacquer contains solvents that are later evaporated; the lacquer may be chemically roughened to create an aluminum coating with holes that allow the solvents to escape.
Phosphor persistence
Various phosphors are available depending upon the needs of the measurement or display application. The brightness, color, and persistence of the illumination depends upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is generally preferable. The phosphor persistence must be low enough to avoid smearing or ghosting artifacts at high refresh rates.
Limitations and workarounds
Blooming
Variations in anode voltage can lead to variations in brightness in parts or all of the image, in addition to blooming, shrinkage or the image getting zoomed in or out. Lower voltages lead to blooming and zooming in, while higher voltages do the opposite. Some blooming is unavoidable, which can be seen as bright areas of an image that expand, distorting or pushing aside surrounding darker areas of the same image. Blooming occurs because bright areas have a higher electron beam current from the electron gun, making the beam wider and harder to focus. Poor voltage regulation causes focus and anode voltage to go down with increasing electron beam current.
Doming
Doming is a phenomenon found on some CRT televisions in which parts of the shadow mask become heated. In televisions that exhibit this behavior, it tends to occur in high-contrast scenes in which a largely dark scene contains one or more localized bright spots. As the electron beam hits the shadow mask in these areas, it heats the mask unevenly. The shadow mask warps due to the temperature differences, which causes the electron beams to land on the wrong colored phosphors and incorrect colors to be displayed in the affected area. Thermal expansion causes the shadow mask to expand by around 100 microns.
During normal operation, the shadow mask is heated to around 80–90 °C. Bright areas of images heat the shadow mask more than dark areas, leading to uneven heating and warping due to thermal expansion driven by the increased electron beam current. The shadow mask is usually made of steel, but it can be made of Invar (a low-thermal-expansion nickel-iron alloy), which withstands two to three times more current than conventional masks without noticeable warping, while making higher-resolution CRTs easier to achieve. Coatings that dissipate heat may be applied to the shadow mask to limit blooming, in a process called blackening.
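As a rough check on the ~100 micron figure, the sketch below applies the linear thermal expansion relation ΔL = αLΔT to an assumed plain-steel mask with a 0.3 m span and a 30 K temperature rise; all inputs are illustrative assumptions.

```python
# Linear thermal expansion: dL = alpha * L * dT.
# alpha for plain steel is roughly 12e-6 per kelvin; Invar is roughly an order
# of magnitude lower. The 0.3 m span and 30 K rise are illustrative assumptions.

def expansion_um(length_m, delta_t_k, alpha_per_k):
    return alpha_per_k * length_m * delta_t_k * 1e6

print(f"steel mask: ~{expansion_um(0.3, 30, 12e-6):.0f} micrometres of expansion")
print(f"Invar mask: ~{expansion_um(0.3, 30, 1.5e-6):.0f} micrometres of expansion")
```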
Bimetal springs may be used in CRTs used in TVs to compensate for warping that occurs as the electron beam heats the shadow mask, causing thermal expansion. The shadow mask is installed to the screen using metal pieces or a rail or frame that is fused to the funnel or the screen glass respectively, holding the shadow mask in tension to minimize warping (if the mask is flat, used in flat-screen CRT computer monitors) and allowing for higher image brightness and contrast.
Aperture grille screens are brighter since they allow more electrons through, but they require support wires. They are also more resistant to warping. Color CRTs need higher anode voltages than monochrome CRTs to achieve the same brightness because the shadow mask blocks most of the electron beam. Slot masks, and especially aperture grilles, do not block as many electrons, resulting in a brighter image for a given anode voltage, but aperture grille CRTs are heavier. Shadow masks block 80–85% of the electron beam, while aperture grilles allow more electrons to pass through.
High voltage
Image brightness is related to the anode voltage and to the CRT's size, so higher voltages are needed both for larger screens and for higher image brightness. Image brightness is also controlled by the current of the electron beam. Higher anode voltages and electron beam currents also mean more x-rays and more heat, since the electrons have higher speed and energy. Leaded glass and special barium-strontium glass are used to block most x-ray emissions.
Size
Size is limited by anode voltage, as larger sizes would require a higher dielectric strength to prevent arcing (corona discharge) and the electrical losses and ozone generation it causes, without sacrificing image brightness. The weight of the CRT, which comes from the thick glass needed to safely sustain a vacuum, also imposes a practical limit on its size; the 43-inch Sony PVM-4300 CRT monitor was extremely heavy, while smaller CRTs weigh significantly less, and a flat-panel TV of comparable screen size weighs only a small fraction as much as the equivalent CRT.
Shadow masks become more difficult to make with increasing resolution and size.
Limits imposed by deflection
At high deflection angles, resolutions and refresh rates (since higher resolutions and refresh rates require significantly higher frequencies to be applied to the horizontal deflection coils), the deflection yoke starts to produce large amounts of heat, due to the need to move the electron beam through a larger angle, which in turn requires exponentially larger amounts of power. As an example, to increase the deflection angle from 90 to 120°, the power consumption of the yoke must go up from 40 watts to 80 watts, and to increase it further from 120 to 150°, deflection power must again go up from 80 watts to 160 watts. This normally makes CRTs that go beyond certain deflection angles, resolutions and refresh rates impractical, since the coils would generate too much heat, due to resistance caused by the skin effect, surface and eddy current losses, and hysteresis losses in the magnetic core, melting the insulation of the coils and/or possibly causing the glass underneath the coil to become conductive (as the electrical conductivity of glass increases with increasing temperature). Some deflection yokes are designed to dissipate the heat that comes from their operation. Higher deflection angles in color CRTs directly affect convergence at the corners of the screen, which requires additional compensation circuitry to handle electron beam power and shape, leading to higher costs and power consumption. Higher deflection angles allow a CRT of a given size to be slimmer, but they also impose more stress on the CRT envelope, especially on the panel, the seal between the panel and funnel, and on the funnel. The funnel needs to be long enough to minimize stress, as a longer funnel can be better shaped to have lower stress.
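The quoted figures (40 W at 90°, 80 W at 120°, 160 W at 150°) amount to a rough doubling of yoke power for each additional 30° of deflection. The sketch below simply encodes that pattern for interpolation; it is not a physical model of the yoke.

```python
# Yoke power roughly doubles for each extra 30 degrees of deflection angle,
# per the example figures quoted above (40 W at 90 degrees). This is only an
# interpolation of those quoted points, not a physical model.

def yoke_power_w(deflection_deg, base_deg=90, base_w=40):
    return base_w * 2 ** ((deflection_deg - base_deg) / 30)

for angle in (90, 110, 120, 150):
    print(f"{angle} deg -> ~{yoke_power_w(angle):.0f} W")
```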
Types
CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes had no overscan and were of higher resolution.
Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes.
Monochrome CRTs
If the CRT is a black-and-white (B&W or monochrome) CRT, there is a single electron gun in the neck, and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps (necessary to prevent ion burn of the phosphor) while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons, providing a return path for them. Previously, funnels were coated on the inside with aquadag, used because it can be applied like paint, and the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also increased image brightness since the aluminum reflected light (that would otherwise be lost inside the CRT) towards the outside of the CRT. In aluminized monochrome CRTs, aquadag is used on the outside. There is a single aluminum coating covering the funnel and the screen.
The screen, funnel and neck are fused together into a single envelope, possibly using lead enamel seals. A hole is made in the funnel onto which the anode cap is installed, and the phosphor, aquadag and aluminum are applied afterwards. Previously, monochrome CRTs used ion traps, which required magnets; the magnet deflected the electrons away from the harder-to-deflect ions, letting the electrons through while the ions collided with a sheet of metal inside the electron gun. Ion burn results in premature wear of the phosphor; because ions are harder to deflect than electrons, ion burn leaves a black dot in the center of the screen.
The interior aquadag or aluminum coating was the anode and served to accelerate the electrons towards the screen, collect them after hitting the screen while serving as a capacitor together with the outer aquadag coating. The screen has a single uniform phosphor coating and no shadow mask, technically having no resolution limit.
Monochrome CRTs may use ring magnets to adjust the centering of the electron beam and magnets around the deflection yoke to adjust the geometry of the image.
Color CRTs
Color CRTs use three different phosphors which emit red, green, and blue light respectively. They are packed together in stripes (as in aperture grille designs) or clusters called "triads" (as in shadow mask CRTs).
Color CRTs have three electron guns, one for each primary color, (red, green and blue) arranged either in a straight line (in-line) or in an equilateral triangular configuration (the guns are usually constructed as a single unit). (The triangular configuration is often called "delta-gun", based on its relation to the shape of the Greek letter delta Δ.) The arrangement of the phosphors is the same as that of the electron guns. A grille or mask absorbs the electrons that would otherwise hit the wrong phosphor.
A shadow mask tube uses a metal plate with tiny holes, typically in a delta configuration, placed so that the electron beam only illuminates the correct phosphors on the face of the tube, blocking all other electrons. Shadow masks that use slots instead of holes are known as slot masks. The holes or slots are tapered so that electrons that strike the inside of a hole are reflected back, if they are not absorbed (e.g. due to local charge accumulation), instead of bouncing through the hole to strike a random (wrong) spot on the screen. Another type of color CRT (Trinitron) uses an aperture grille of tensioned vertical wires to achieve the same result. The shadow mask has a single hole for each triad and is usually about 1/2 inch behind the screen.
Trinitron CRTs were different from other color CRTs in that they had a single electron gun with three cathodes, an aperture grille which lets more electrons through, increasing image brightness (since the aperture grille does not block as many electrons), and a vertically cylindrical screen, rather than a curved screen.
The three electron guns are in the neck (except for Trinitrons) and the red, green and blue phosphors on the screen may be separated by a black grid or matrix (called black stripe by Toshiba).
The funnel is coated with aquadag on both sides, while the screen has a separate aluminum coating applied in a vacuum. The aluminum coating protects the phosphor from ions; absorbs secondary electrons, providing them with a return path and preventing them from electrostatically charging the screen, which would repel electrons and reduce image brightness; reflects the light from the phosphors forwards; and helps manage heat. It also serves as the anode of the CRT together with the inner aquadag coating. The inner coating is electrically connected to an electrode of the electron gun using springs, forming the final anode. The outer aquadag coating is connected to ground, possibly using a series of springs or a harness that makes contact with the aquadag.
Shadow mask
The shadow mask absorbs or reflects electrons that would otherwise strike the wrong phosphor dots, causing color purity issues (discoloration of images); in other words, when set up correctly, the shadow mask helps ensure color purity. When the electrons strike the shadow mask, they release their energy as heat and x-rays. If the electrons have too much energy due to an anode voltage that is too high for example, the shadow mask can warp due to the heat, which can also happen during the Lehr baking at approx. 435 °C of the frit seal between the faceplate and the funnel of the CRT.
Shadow masks were replaced in TVs by slot masks in the 1970s, since slot masks let more electrons through, increasing image brightness. Shadow masks may be connected electrically to the anode of the CRT. Trinitron used a single electron gun with three cathodes instead of three complete guns. CRT PC monitors usually use shadow masks, except for Sony's Trinitron, Mitsubishi's Diamondtron and NEC's Cromaclear; Trinitron and Diamondtron use aperture grilles while Cromaclear uses a slot mask. Some shadow mask CRTs have color phosphors that are smaller in diameter than the electron beams used to light them, with the intention being to cover the entire phosphor, increasing image brightness. Shadow masks may be pressed into a curved shape.
Screen manufacture
Early color CRTs did not have a black matrix, which was introduced by Zenith in 1969 and Panasonic in 1970. The black matrix eliminates light leaking from one phosphor to another, since it isolates the phosphor dots from one another; part of the electron beam lands on the black matrix. The matrix is also made necessary by warping of the shadow mask. Light bleeding may still occur due to stray electrons striking the wrong phosphor dots. At high resolutions and refresh rates, phosphors only receive a very small amount of energy, limiting image brightness.
Several methods were used to create the black matrix. One method coated the screen in photoresist such as dichromate-sensitized polyvinyl alcohol photoresist which was then dried and exposed; the unexposed areas were removed and the entire screen was coated in colloidal graphite to create a carbon film, and then hydrogen peroxide was used to remove the remaining photoresist alongside the carbon that was on top of it, creating holes that in turn created the black matrix. The photoresist had to be of the correct thickness to ensure sufficient adhesion to the screen, while the exposure step had to be controlled to avoid holes that were too small or large with ragged edges caused by light diffraction, ultimately limiting the maximum resolution of large color CRTs. The holes were then filled with phosphor using the method described above. Another method used phosphors suspended in an aromatic diazonium salt that adhered to the screen when exposed to light; the phosphors were applied, then exposed to cause them to adhere to the screen, repeating the process once for each color. Then carbon was applied to the remaining areas of the screen while exposing the entire screen to light to create the black matrix, and a fixing process using an aqueous polymer solution was applied to the screen to make the phosphors and black matrix resistant to water. Black chromium may be used instead of carbon in the black matrix. Other methods were also used.
The phosphors are applied using photolithography. The inner side of the screen is coated with phosphor particles suspended in PVA photoresist slurry, which is then dried using infrared light, exposed, and developed. The exposure is done using a "lighthouse" that uses an ultraviolet light source with a corrector lens to allow the CRT to achieve color purity. Removable shadow masks with spring-loaded clips are used as photomasks. The process is repeated with all colors. Usually the green phosphor is the first to be applied. After phosphor application, the screen is baked to eliminate any organic chemicals (such as the PVA that was used to deposit the phosphor) that may remain on the screen. Alternatively, the phosphors may be applied in a vacuum chamber by evaporating them and allowing them to condense on the screen, creating a very uniform coating. Early color CRTs had their phosphors deposited using silkscreen printing. Phosphors may have color filters over them (facing the viewer), contain pigment of the color emitted by the phosphor, or be encapsulated in color filters to improve color purity and reproduction while reducing glare. Poor exposure due to insufficient light leads to poor phosphor adhesion to the screen, which limits the maximum resolution of a CRT, as the smaller phosphor dots required for higher resolutions cannot receive as much light due to their smaller size.
After the screen is coated with phosphor and aluminum and the shadow mask is installed onto it, the screen is bonded to the funnel using a glass frit that may contain 65 to 88% lead oxide by weight. The lead oxide is necessary for the glass frit to have a low melting temperature. Boron(III) oxide may also be present to stabilize the frit, with alumina powder as a filler to control its thermal expansion. The frit may be applied as a paste consisting of frit particles suspended in amyl acetate, or in a polymer with an alkyl methacrylate monomer together with an organic solvent to dissolve the polymer and monomer. The CRT is then baked in an oven, in what is called a Lehr bake, to cure the frit, sealing the funnel and screen together. The frit contains a large quantity of lead, causing color CRTs to contain more lead than their monochrome counterparts. Monochrome CRTs, on the other hand, do not require frit; the funnel can be fused directly to the screen by melting and joining the edges of the funnel and screen using gas flames. Frit is used in color CRTs to prevent deformation of the shadow mask and screen during the fusing process; the edges of the screen and funnel of a color CRT are never melted. A primer may be applied on the edges of the funnel and screen before the frit paste is applied to improve adhesion. The Lehr bake consists of several successive steps that heat and then cool the CRT gradually until it reaches a temperature of 435 to 475 °C (other sources may state different temperatures, such as 440 °C). After the Lehr bake, the CRT is flushed with air or nitrogen to remove contaminants, the electron gun is inserted and sealed into the neck of the CRT, and a vacuum is formed in the CRT.
Convergence and purity in color CRTs
Due to limitations in the dimensional precision with which CRTs can be manufactured economically, it has not been practically possible to build color CRTs in which three electron beams could be aligned to hit phosphors of respective color in acceptable coordination, solely on the basis of the geometric configuration of the electron gun axes and gun aperture positions, shadow mask apertures, etc. The shadow mask ensures that one beam will only hit spots of certain colors of phosphors, but minute variations in physical alignment of the internal parts among individual CRTs will cause variations in the exact alignment of the beams through the shadow mask, allowing some electrons from, for example, the red beam to hit, say, blue phosphors, unless some individual compensation is made for the variance among individual tubes.
Color convergence and color purity are two aspects of this single problem. Firstly, for correct color rendering it is necessary that regardless of where the beams are deflected on the screen, all three hit the same spot (and nominally pass through the same hole or slot) on the shadow mask. This is called convergence. More specifically, the convergence at the center of the screen (with no deflection field applied by the yoke) is called static convergence, and the convergence over the rest of the screen area (especially at the edges and corners) is called dynamic convergence. The beams may converge at the center of the screen and yet stray from each other as they are deflected toward the edges; such a CRT would be said to have good static convergence but poor dynamic convergence. Secondly, each beam must only strike the phosphors of the color it is intended to strike and no others. This is called purity. Like convergence, there is static purity and dynamic purity, with the same meanings of "static" and "dynamic" as for convergence. Convergence and purity are distinct parameters; a CRT could have good purity but poor convergence, or vice versa. Poor convergence causes color "shadows" or "ghosts" along displayed edges and contours, as if the image on the screen were intaglio printed with poor registration. Poor purity causes objects on the screen to appear off-color while their edges remain sharp. Purity and convergence problems can occur at the same time, in the same or different areas of the screen or both over the whole screen, and either uniformly or to greater or lesser degrees over different parts of the screen.
The solution to the static convergence and purity problems is a set of color alignment ring magnets installed around the neck of the CRT. These movable weak permanent magnets are usually mounted on the back end of the deflection yoke assembly and are set at the factory to compensate for any static purity and convergence errors that are intrinsic to the unadjusted tube. Typically there are two or three pairs of two magnets in the form of rings made of plastic impregnated with a magnetic material, with their magnetic fields parallel to the planes of the magnets, which are perpendicular to the electron gun axes. Often, one ring has two poles, another has 4, and the remaining ring has 6 poles. Each pair of magnetic rings forms a single effective magnet whose field vector can be fully and freely adjusted (in both direction and magnitude). By rotating a pair of magnets relative to each other, their relative field alignment can be varied, adjusting the effective field strength of the pair. (As they rotate relative to each other, each magnet's field can be considered to have two opposing components at right angles, and these four components [two each for two magnets] form two pairs, one pair reinforcing each other and the other pair opposing and canceling each other. Rotating away from alignment, the magnets' mutually reinforcing field components decrease as they are traded for increasing opposed, mutually cancelling components.) By rotating a pair of magnets together, preserving the relative angle between them, the direction of their collective magnetic field can be varied. Overall, adjusting all of the convergence/purity magnets allows a finely tuned slight electron beam deflection or lateral offset to be applied, which compensates for minor static convergence and purity errors intrinsic to the uncalibrated tube. Once set, these magnets are usually glued in place, but normally they can be freed and readjusted in the field (e.g. by a TV repair shop) if necessary.
On some CRTs, additional fixed adjustable magnets are added for dynamic convergence or dynamic purity at specific points on the screen, typically near the corners or edges. Further adjustment of dynamic convergence and purity typically cannot be done passively, but requires active compensation circuits, one to correct convergence horizontally and another to correct it vertically. The deflection yoke contains convergence coils, a set of two per color, wound on the same core, to which the convergence signals are applied. That means 6 convergence coils in groups of 3, with 2 coils per group, with one coil for horizontal convergence correction and another for vertical convergence correction, with each group sharing a core. The groups are separated 120° from one another. Dynamic convergence is necessary because the front of the CRT and the shadow mask aren't spherical, compensating for electron beam defocusing and astigmatism. The fact that the CRT screen isn't spherical leads to geometry problems which may be corrected using a circuit. The signals used for convergence are parabolic waveforms derived from three signals coming from a vertical output circuit. The parabolic signal is fed into the convergence coils, while the other two are sawtooth signals that, when mixed with the parabolic signals, create the necessary signal for convergence. A resistor and diode are used to lock the convergence signal to the center of the screen to prevent it from being affected by the static convergence. The horizontal and vertical convergence circuits are similar. Each circuit has two resonators, one usually tuned to 15,625 Hz and the other to 31,250 Hz, which set the frequency of the signal sent to the convergence coils. Dynamic convergence may be accomplished using electrostatic quadrupole fields in the electron gun. Dynamic convergence means that the electron beam does not travel in a perfectly straight line between the deflection coils and the screen, since the convergence coils cause it to become curved to conform to the screen.
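A hedged sketch of one way a parabolic correction waveform can be derived: integrating a symmetric sawtooth (ramp) over each scan period yields a parabola per cycle. The sample count and amplitudes below are arbitrary illustrative choices, not values from any convergence circuit.

```python
# Integrating a symmetric sawtooth (ramp) over one scan period gives a parabolic
# waveform, the shape used for dynamic convergence correction. The sample count
# and amplitude here are arbitrary illustrative values.

def parabola_from_sawtooth(samples_per_period=100):
    ramp = [-1.0 + 2.0 * n / (samples_per_period - 1) for n in range(samples_per_period)]
    parabola, acc = [], 0.0
    for r in ramp:
        acc += r / samples_per_period  # discrete integration of the ramp
        parabola.append(acc)
    return ramp, parabola

ramp, par = parabola_from_sawtooth()
print(f"parabola minimum ~{min(par):.3f} at mid-scan, endpoints ~{par[0]:.3f} / {par[-1]:.3f}")
```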
The convergence signal may instead be a sawtooth signal with a slight sine wave appearance, the sine wave part is created using a capacitor in series with each deflection coil. In this case, the convergence signal is used to drive the deflection coils. The sine wave part of the signal causes the electron beam to move more slowly near the edges of the screen. The capacitors used to create the convergence signal are known as the s-capacitors. This type of convergence is necessary due to the high deflection angles and flat screens of many CRT computer monitors. The value of the s-capacitors must be chosen based on the scan rate of the CRT, so multi-syncing monitors must have different sets of s-capacitors, one for each refresh rate.
Dynamic convergence may instead be accomplished in some CRTs using only the ring magnets, magnets glued to the CRT, and by varying the position of the deflection yoke, whose position may be maintained using set screws, a clamp and rubber wedges. 90° deflection angle CRTs may use "self-convergence" without dynamic convergence, which, together with the in-line triad arrangement, eliminates the need for separate convergence coils and related circuitry, reducing costs, complexity and CRT depth by 10 millimeters. Self-convergence works by means of "nonuniform" magnetic fields. Dynamic convergence is necessary in 110° deflection angle CRTs, and quadrupole windings on the deflection yoke at a certain frequency may also be used for dynamic convergence.
Dynamic color convergence and purity are one of the main reasons why until late in their history, CRTs were long-necked (deep) and had biaxially curved faces; these geometric design characteristics are necessary for intrinsic passive dynamic color convergence and purity. Only starting around the 1990s did sophisticated active dynamic convergence compensation circuits become available that made short-necked and flat-faced CRTs workable. These active compensation circuits use the deflection yoke to finely adjust beam deflection according to the beam target location. The same techniques (and major circuit components) also make possible the adjustment of display image rotation, skew, and other complex raster geometry parameters through electronics under user control.
The guns are aligned with one another (converged) using convergence rings placed right outside the neck; there is one ring per gun. The rings have north and south poles. There are 4 sets of rings, one to adjust RGB convergence, a second to adjust Red and Blue convergence, a third to adjust vertical raster shift, and a fourth to adjust purity. The vertical raster shift adjusts the straightness of the scan line. CRTs may also employ dynamic convergence circuits, which ensure correct convergence at the edges of the CRT. Permalloy magnets may also be used to correct the convergence at the edges. Convergence is carried out with the help of a crosshatch (grid) pattern. Other CRTs may instead use magnets that are pushed in and out instead of rings. In early color CRTs, the holes in the shadow mask became progressively smaller as they extended outwards from the center of the screen, to aid in convergence.
Magnetic shielding and degaussing
If the shadow mask or aperture grille becomes magnetized, its magnetic field alters the paths of the electron beams. This causes errors of "color purity" as the electrons no longer follow only their intended paths, and some will hit some phosphors of colors other than the one intended. For example, some electrons from the red beam may hit blue or green phosphors, imposing a magenta or yellow tint to parts of the image that are supposed to be pure red. (This effect is localized to a specific area of the screen if the magnetization is localized.) Therefore, it is important that the shadow mask or aperture grille not be magnetized. The earth's magnetic field may have an effect on the color purity of the CRT. Because of this, some CRTs have external magnetic shields over their funnels. The magnetic shield may be made of soft iron or mild steel and contain a degaussing coil. The magnetic shield and shadow mask may be permanently magnetized by the earth's magnetic field, adversely affecting color purity when the CRT is moved. This problem is solved with a built-in degaussing coil, found in many TVs and computer monitors. Degaussing may be automatic, occurring whenever the CRT is turned on. The magnetic shield may also be internal, being on the inside of the funnel of the CRT.
Most color CRT displays (those used in television sets and computer monitors) have a built-in degaussing (demagnetizing) circuit, the primary component of which is a degaussing coil which is mounted around the perimeter of the CRT face inside the bezel. Upon power-up of the CRT display, the degaussing circuit produces a brief, alternating current through the degaussing coil which smoothly decays in strength (fades out) to zero over a period of a few seconds, producing a decaying alternating magnetic field from the coil. This degaussing field is strong enough to remove shadow mask magnetization in most cases, maintaining color purity. In unusual cases of strong magnetization where the internal degaussing field is not sufficient, the shadow mask may be degaussed externally with a stronger portable degausser or demagnetizer. However, an excessively strong magnetic field, whether alternating or constant, may mechanically deform (bend) the shadow mask, causing a permanent color distortion on the display which looks very similar to a magnetization effect.
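A minimal sketch of the decaying alternating degaussing current described above, modelled as a mains-frequency sinusoid with an exponentially decaying envelope; the 50 Hz frequency, peak current and time constant are illustrative assumptions, not measured values.

```python
import math

# Degaussing current: a mains-frequency sinusoid whose amplitude decays smoothly
# towards zero over a few seconds. Peak current, frequency and time constant
# below are illustrative assumptions.

def degauss_current(t, peak_a=5.0, mains_hz=50.0, decay_s=0.8):
    return peak_a * math.exp(-t / decay_s) * math.sin(2 * math.pi * mains_hz * t)

for t in (0.005, 0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:5.3f} s  i = {degauss_current(t):+7.3f} A")
```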
The degaussing circuit is often built of a thermo-electric (not electronic) device containing a small ceramic heating element and a positive thermal coefficient (PTC) resistor, connected directly to the switched AC power line with the resistor in series with the degaussing coil. When the power is switched on, the heating element heats the PTC resistor, increasing its resistance to a point where degaussing current is minimal, but not actually zero. In older CRT displays, this low-level current (which produces no significant degaussing field) is sustained along with the action of the heating element as long as the display remains switched on. To repeat a degaussing cycle, the CRT display must be switched off and left off for at least several seconds to reset the degaussing circuit by allowing the PTC resistor to cool to the ambient temperature; switching the display-off and immediately back on will result in a weak degaussing cycle or effectively no degaussing cycle.
This simple design is effective and cheap to build, but it wastes some power continuously. Later models, especially Energy Star rated ones, use a relay to switch the entire degaussing circuit on and off, so that the degaussing circuit uses energy only when it is functionally active and needed. The relay design also enables degaussing on user demand through the unit's front panel controls, without switching the unit off and on again. This relay can often be heard clicking off at the end of the degaussing cycle a few seconds after the monitor is turned on, and on and off during a manually initiated degaussing cycle.
Resolution
Dot pitch defines the maximum resolution of the display, assuming delta-gun CRTs. In these, as the scanned resolution approaches the dot pitch resolution, moiré appears, as the detail being displayed is finer than what the shadow mask can render. Aperture grille monitors do not suffer from vertical moiré, however, because their phosphor stripes have no vertical detail. In smaller CRTs, these stripes maintain position by themselves, but larger aperture-grille CRTs require crosswise (horizontal) support strips: one for moderately sized CRTs and two for larger ones. The support wires block electrons, causing the wires to be visible. In aperture grille CRTs, dot pitch is replaced by stripe pitch. Hitachi developed the Enhanced Dot Pitch (EDP) shadow mask, which uses oval holes instead of circular ones, with respective oval phosphor dots. Moiré is reduced in shadow mask CRTs by arranging the holes in the shadow mask in a honeycomb-like pattern.
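As a hedged illustration of how dot pitch bounds addressable resolution, dividing the viewable screen width by the horizontal pitch gives an upper limit on the number of distinct horizontal triads; the 17-inch viewable size, 4:3 aspect ratio and 0.25 mm pitch below are assumptions for illustration.

```python
# Upper bound on horizontal triads: viewable width / horizontal dot pitch.
# The 17-inch viewable 4:3 screen and 0.25 mm pitch are illustrative assumptions.

def max_horizontal_triads(viewable_diag_in=17.0, pitch_mm=0.25, aspect_w=4, aspect_h=3):
    diag_mm = viewable_diag_in * 25.4
    width_mm = diag_mm * aspect_w / (aspect_w**2 + aspect_h**2) ** 0.5
    return int(width_mm / pitch_mm)

print(f"~{max_horizontal_triads()} triads across the screen width")
```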
Projection CRTs
Projection CRTs were used in CRT projectors and CRT rear-projection televisions, and are usually small (being 7 to 9 inches across); have a phosphor that generates either red, green or blue light, thus making them monochrome CRTs; and are similar in construction to other monochrome CRTs. Larger projection CRTs in general lasted longer and were able to provide higher brightness levels and resolution, but were also more expensive. Projection CRTs have an unusually high anode voltage for their size (such as 27 or 25 kV for a 5 or 7-inch projection CRT respectively), and a specially made tungsten/barium cathode (instead of the pure barium oxide normally used) that consists of barium atoms embedded in 20% porous tungsten, or of barium and calcium aluminates, or of barium, calcium and aluminum oxides coated on porous tungsten; the barium diffuses through the tungsten to emit electrons. The special cathode can deliver 2 mA of current instead of the 0.3 mA of normal cathodes, which makes them bright enough to be used as light sources for projection. The high anode voltage and the specially made cathode increase the voltage and current, respectively, of the electron beam, which increases the light emitted by the phosphors, and also the amount of heat generated during operation; this means that projector CRTs need cooling. The screen is usually cooled using a container (the screen forms part of the container) with glycol; the glycol may itself be dyed, or colorless glycol may be used inside a container which may be colored (forming a lens known as a c-element). Colored lenses or glycol are used for improving color reproduction at the cost of brightness, and are only used on red and green CRTs. Each CRT has its own glycol, which has access to an air bubble to allow the glycol to shrink and expand as it cools and warms. Projector CRTs may have adjustment rings, just like color CRTs, to adjust astigmatism, which is flaring of the electron beam (stray light similar to shadows). They have three adjustment rings: one with two poles, one with four poles, and another with six poles. When correctly adjusted, the projector can display perfectly round dots without flaring. The screens used in projection CRTs were more transparent than usual, with 90% transmittance. The first projection CRTs were made in 1933.
Projector CRTs were available with electrostatic and electromagnetic focusing, the latter being more expensive. Electrostatic focusing used electronics to focus the electron beam, together with focusing magnets around the neck of the CRT for fine focusing adjustments. This type of focusing degraded over time. Electromagnetic focusing was introduced in the early 1990s and included an electromagnetic focusing coil in addition to the already existing focusing magnets. Electromagnetic focusing was much more stable over the lifetime of the CRT, retaining 95% of its sharpness by the end of life of the CRT.
Beam-index tube
Beam-index tubes, also known as Uniray, Apple CRT or Indextron, were an attempt in the 1950s by Philco to create a color CRT without a shadow mask, eliminating convergence and purity problems and allowing for shallower CRTs with higher deflection angles. They also required a lower-voltage power supply for the final anode, since they did not use a shadow mask, which normally blocks around 80% of the electrons generated by the electron gun. The lack of a shadow mask also made them immune to the earth's magnetic field, made degaussing unnecessary and increased image brightness. A beam-index tube was constructed similarly to a monochrome CRT, with an aquadag outer coating, an aluminum inner coating and a single electron gun, but with a screen bearing an alternating pattern of red, green, blue and UV (index) phosphor stripes (similar to a Trinitron), and with a side-mounted photomultiplier tube or photodiode pointed towards the rear of the screen and mounted on the funnel of the CRT, used to track the electron beam so that the phosphors could be activated separately from one another using the same electron beam. Only the index phosphor stripe was used for tracking, and it was the only phosphor that was not covered by an aluminum layer. The design was shelved because of the precision required to produce it. It was revived by Sony in the 1980s as the Indextron, but its adoption was limited, at least in part due to the development of LCD displays. Beam-index CRTs also suffered from poor contrast ratios of only around 50:1, since some light emission by the phosphors was required at all times so the photodiodes could track the electron beam. The technology allowed for single-CRT color CRT projectors, thanks to the lack of a shadow mask; normally CRT projectors use three CRTs, one for each color, since a lot of heat is generated due to the high anode voltage and beam current, making a shadow mask impractical and inefficient as it would warp under the heat produced (shadow masks absorb most of the electron beam, and hence most of the energy carried by the relativistic electrons). Using three CRTs meant that an involved calibration and adjustment procedure had to be carried out during installation of the projector, and moving the projector would require it to be recalibrated. A single CRT eliminated the need for such calibration, but brightness was decreased since the single CRT screen had to be used for all three colors. A stripe pattern also imposes a horizontal resolution limit; in contrast, three-screen CRT projectors have no theoretical resolution limit, due to their single, uniform phosphor coatings.
Flat CRTs
Flat CRTs are those with a flat screen. Despite having a flat screen, they may not be completely flat, especially on the inside, instead having a greatly increased radius of curvature. A notable exception is the LG Flatron (made by LG.Philips Displays, later LP Displays), which is truly flat on the outside and inside, but has a bonded glass pane on the screen with a tensioned rim band to provide implosion protection. Such completely flat CRTs were first introduced by Zenith in 1986, and use flat tensioned shadow masks, where the shadow mask is held under tension, providing increased resistance to blooming.
Flat CRTs have a number of challenges, like deflection. Vertical deflection boosters are required to increase the amount of current that is sent to the vertical deflection coils to compensate for the reduced curvature. The CRTs used in the Sinclair TV80 and in many Sony Watchmans were flat in that they were not deep and their front screens were flat, but their electron guns were put to a side of the screen. The TV80 used electrostatic deflection while the Watchman used magnetic deflection with a phosphor screen that was curved inwards. Similar CRTs were used in video door bells.
Radar CRTs
Radar CRTs such as the 7JP4 had a circular screen and scanned the beam from the center outwards. The screen often had two phosphors: a bright short-persistence phosphor that only appeared as the beam scanned the display, and a long-persistence phosphor that provided an afterglow. When the beam strikes the phosphor, the phosphor illuminates brightly; when the beam leaves, the dimmer long-persistence afterglow remains lit where the beam struck, showing the radar targets "written" by the beam until the beam strikes that part of the phosphor again. The deflection yoke rotated, causing the beam to sweep around the screen in a circular fashion.
Oscilloscope CRTs
In oscilloscope CRTs, electrostatic deflection is used rather than the magnetic deflection commonly used with television and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Televisions use magnetic rather than electrostatic deflection because the deflection plates obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Some oscilloscope CRTs incorporate post-deflection anodes (PDAs) that are spiral-shaped to ensure even anode potential across the CRT and operate at up to 15,000 volts. In PDA CRTs the electron beam is deflected before it is accelerated, improving sensitivity and legibility, especially when analyzing voltage pulses with short duty cycles.
Microchannel plate
When displaying fast one-shot events, the electron beam must deflect very quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for very fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches the screen. Through the phenomenon of secondary emission, this plate multiplies the number of electrons reaching the phosphor screen, giving a significant improvement in writing rate (brightness) and improved sensitivity and spot size as well.
Graticules
Most oscilloscopes have a graticule as part of the visual display, to facilitate measurements. The graticule may be permanently marked inside the face of the CRT, or it may be a transparent external plate made of glass or acrylic plastic. An internal graticule eliminates parallax error, but cannot be changed to accommodate different types of measurements. Oscilloscopes commonly provide a means for the graticule to be illuminated from the side, which improves its visibility.
Image storage tubes
These are found in analog phosphor storage oscilloscopes. These are distinct from digital storage oscilloscopes which rely on solid state digital memory to store the image.
Where a single brief event is monitored by an oscilloscope, such an event will be displayed by a conventional tube only while it actually occurs. The use of a long persistence phosphor may allow the image to be observed after the event, but only for a few seconds at best. This limitation can be overcome by the use of a direct view storage cathode-ray tube (storage tube). A storage tube will continue to display the event after it has occurred until such time as it is erased. A storage tube is similar to a conventional tube except that it is equipped with a metal grid coated with a dielectric layer located immediately behind the phosphor screen. An externally applied voltage to the mesh initially ensures that the whole mesh is at a constant potential. This mesh is constantly exposed to a low velocity electron beam from a 'flood gun' which operates independently of the main gun. This flood gun is not deflected like the main gun but constantly 'illuminates' the whole of the storage mesh. The initial charge on the storage mesh is such as to repel the electrons from the flood gun which are prevented from striking the phosphor screen.
When the main electron gun writes an image to the screen, the energy in the main beam is sufficient to create a 'potential relief' on the storage mesh. The areas where this relief is created no longer repel the electrons from the flood gun which now pass through the mesh and illuminate the phosphor screen. Consequently, the image that was briefly traced out by the main gun continues to be displayed after it has occurred. The image can be 'erased' by resupplying the external voltage to the mesh restoring its constant potential. The time for which the image can be displayed was limited because, in practice, the flood gun slowly neutralises the charge on the storage mesh. One way of allowing the image to be retained for longer is temporarily to turn off the flood gun. It is then possible for the image to be retained for several days. The majority of storage tubes allow for a lower voltage to be applied to the storage mesh which slowly restores the initial charge state. By varying this voltage a variable persistence is obtained. Turning off the flood gun and the voltage supply to the storage mesh allows such a tube to operate as a conventional oscilloscope tube.
Vector monitors
Vector monitors were used in early computer aided design systems and are in some late-1970s to mid-1980s arcade games such as Asteroids.
They draw graphics point-to-point, rather than scanning a raster. Either monochrome or color CRTs can be used in vector displays, and the essential principles of CRT design and operation are the same for either type of display; the main difference is in the beam deflection patterns and circuits.
Data storage tubes
The Williams tube or Williams-Kilburn tube was a cathode-ray tube used to electronically store binary data. It was used in computers of the 1940s as a random-access digital storage device. In contrast to other CRTs in this article, the Williams tube was not a display device, and in fact could not be viewed since a metal plate covered its screen.
Cat's eye
In some vacuum tube radio sets, a "Magic Eye" or "Tuning Eye" tube was provided to assist in tuning the receiver. Tuning would be adjusted until the width of a radial shadow was minimized. This was used instead of a more expensive electromechanical meter, which later came to be used on higher-end tuners when transistor sets lacked the high voltage required to drive the device. The same type of device was used with tape recorders as a recording level meter, and for various other applications including electrical test equipment.
Charactrons
Some displays for early computers (those that needed to display more text than was practical using vectors, or that required high speed for photographic output) used Charactron CRTs. These incorporate a perforated metal character mask (stencil), which shapes a wide electron beam to form a character on the screen. The system selects a character on the mask using one set of deflection circuits, but that causes the extruded beam to be aimed off-axis, so a second set of deflection plates has to re-aim the beam so it is headed toward the center of the screen. A third set of plates places the character wherever required. The beam is unblanked (turned on) briefly to draw the character at that position. Graphics could be drawn by selecting the position on the mask corresponding to the code for a space (in practice, they were simply not drawn), which had a small round hole in the center; this effectively disabled the character mask, and the system reverted to regular vector behavior. Charactrons had exceptionally long necks, because of the need for three deflection systems.
Nimo
Nimo was the trademark of a family of small specialised CRTs manufactured by Industrial Electronics Engineers. These had 10 electron guns which produced electron beams in the form of digits in a manner similar to that of the charactron. The tubes were either simple single-digit displays or more complex 4- or 6-digit displays produced by means of a suitable magnetic deflection system. Having little of the complexities of a standard CRT, the tube required a relatively simple driving circuit, and as the image was projected on the glass face, it provided a much wider viewing angle than competitive types (e.g., nixie tubes). However, their requirement for several supply voltages, including a high voltage, made them uncommon.
Flood-beam CRT
Flood-beam CRTs are small tubes that are arranged as pixels for large video walls like Jumbotrons. The first screen using this technology (called Diamond Vision by Mitsubishi Electric) was introduced by Mitsubishi Electric for the 1980 Major League Baseball All-Star Game. It differs from a normal CRT in that the electron gun within does not produce a focused controllable beam. Instead, electrons are sprayed in a wide cone across the entire front of the phosphor screen, basically making each unit act as a single light bulb. Each one is coated with a red, green or blue phosphor, to make up the color sub-pixels. This technology has largely been replaced with light-emitting diode displays. Unfocused and undeflected CRTs were used as grid-controlled stroboscope lamps since 1958. Electron-stimulated luminescence (ESL) lamps, which use the same operating principle, were released in 2011.
Print-head CRT
CRTs with an unphosphored front glass but with fine wires embedded in it were used as electrostatic print heads in the 1960s. The wires would pass the electron beam current through the glass onto a sheet of paper where the desired content was therefore deposited as an electrical charge pattern. The paper was then passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image.
Zeus – thin CRT display
In the late 1990s and early 2000s Philips Research Laboratories experimented with a type of thin CRT known as the Zeus display, which contained CRT-like functionality in a flat-panel display. The devices were demonstrated but never marketed.
Slimmer CRT
Some CRT manufacturers, notably LG.Philips Displays (later LP Displays) and Samsung SDI, innovated CRT technology by creating a slimmer tube. Slimmer CRTs were sold under the trade names Superslim, Ultraslim and Vixlim (by Samsung) and Cybertube and Cybertube+ (both by LG.Philips Displays). A flat CRT has a depth. The depth of Superslim was and Ultraslim was .
Health concerns
Ionizing radiation
CRTs can emit a small amount of X-ray radiation; this is a result of the electron beam's bombardment of the shadow mask/aperture grille and phosphors, which produces bremsstrahlung (braking radiation) as the high-energy electrons are decelerated. The amount of radiation escaping the front of the monitor is widely considered not to be harmful. The Food and Drug Administration regulations in are used to strictly limit, for instance, television receivers to 0.5 milliroentgens per hour (mR/h) (0.13 µC/(kg·h) or 36 pA/kg) at a distance of from any external surface; since 2007, most CRTs have emissions that fall well below this limit.
The density of the x-rays that would be generated by a CRT is low because the raster scan of a typical CRT distributes the energy of the electron beam across the entire screen. Voltages above 15,000 volts are enough to generate "soft" x-rays. However, since CRTs may stay on for several hours at a time, the amount of x-rays generated by the CRT may become significant, hence the importance of using materials to shield against x-rays, such as the thick leaded glass and barium-strontium glass used in CRTs.
Concerns about x-rays emitted by CRTs began in 1967 when it was found that TV sets made by General Electric were emitting “X-radiation in excess of desirable levels”. It was later found that TV sets from all manufacturers were also emitting radiation. This caused television industry representatives to be brought before a U.S. congressional committee, which later proposed a federal radiation regulation bill, which became the 1968 Radiation Control for Health and Safety Act. It was recommended to TV set owners to always be at a distance of at least 6 feet from the screen of the TV set, and to avoid "prolonged exposure" at the sides, rear or underneath a TV set. It was discovered that most of the radiation was directed downwards. Owners were also told to not modify their set's internals to avoid exposure to radiation. Headlines about "radioactive" TV sets continued until the end of the 1960s. There once was a proposal by two New York congressmen that would have forced TV set manufacturers to “go into homes to test all of the nation's 15 million color sets and to install radiation devices in them”. The FDA eventually began regulating radiation emissions from all electronic products in the US.
Toxicity
Older color and monochrome CRTs may have been manufactured with toxic substances, such as cadmium, in the phosphors. The rear glass tube of modern CRTs may be made from leaded glass, which represents an environmental hazard if disposed of improperly. Since 1970, glass in the front panel (the viewable portion of the CRT) has used strontium oxide rather than lead, though the rear of the CRT was still produced from leaded glass. Monochrome CRTs typically do not contain enough leaded glass to fail EPA TCLP tests. While the TCLP process grinds the glass into fine particles in order to expose them to weak acids to test for leachate, intact CRT glass does not leach (the lead is vitrified, contained inside the glass itself, similar to leaded glass crystalware).
Flicker
At low refresh rates (60 Hz and below), the periodic scanning of the display may produce a flicker that some people perceive more easily than others, especially when viewed with peripheral vision. Flicker is commonly associated with CRTs, as most televisions run at 50 Hz (PAL) or 60 Hz (NTSC), although there are some 100 Hz PAL televisions that are flicker-free. Typically only low-end monitors run at such low frequencies, with most computer monitors supporting at least 75 Hz and high-end monitors capable of 100 Hz or more to eliminate any perception of flicker. The 100 Hz PAL rate was often achieved using interleaved scanning, dividing the circuit and scan into two beams of 50 Hz. Non-computer CRTs, such as those used for sonar or radar, may have long-persistence phosphor and are thus flicker-free. If the persistence is too long on a video display, moving images will be blurred.
High-frequency audible noise
50 Hz/60 Hz CRTs used for television operate with horizontal scanning frequencies of 15,734 Hz (for NTSC systems) or 15,625 Hz (for PAL systems). These frequencies are at the upper range of human hearing and are inaudible to many people; however, some people (especially children) will perceive a high-pitched tone near an operating CRT television. The sound is due to magnetostriction in the magnetic core and periodic movement of windings of the flyback transformer but the sound can also be created by movement of the deflection coils, yoke or ferrite beads.
This problem does not occur on 100/120 Hz TVs and on non-CGA (Color Graphics Adapter) computer displays, because they use much higher horizontal scanning frequencies that produce sound which is inaudible to humans (22 kHz to over 100 kHz).
Implosion
High vacuum inside glass-walled cathode-ray tubes permits electron beams to fly freely—without colliding into molecules of air or other gas. If the glass is damaged, atmospheric pressure can collapse the vacuum tube into dangerous fragments which accelerate inward and then spray at high speed in all directions. Although modern cathode-ray tubes used in televisions and computer displays have epoxy-bonded face-plates or other measures to prevent shattering of the envelope, CRTs must be handled carefully to avoid personal injury.
Implosion protection
Early CRTs had a glass plate over the screen that was bonded to it using glue, creating a laminated glass screen: initially the glue was polyvinyl acetate (PVA), while later versions such as the LG Flatron used a resin, perhaps a UV-curable resin. The PVA degrades over time, creating a "cataract", a ring of degraded glue around the edges of the CRT that does not allow light from the screen to pass through. Later CRTs instead use a tensioned metal rim band mounted around the perimeter that also provides mounting points for attaching the CRT to a housing. In a 19-inch CRT, the tensile stress in the rim band is 70 kg/cm². Older CRTs were mounted to the TV set using a frame. The band is tensioned by heating it and then mounting it on the CRT; as the band cools it shrinks, putting the glass under compression, which strengthens the glass and reduces the necessary thickness (and hence weight) of the glass. This makes the band an integral component that should never be removed from an intact CRT that still has a vacuum; attempting to remove it may cause the CRT to implode. The rim band prevents the CRT from imploding should the screen be broken. The rim band may be glued to the perimeter of the CRT using epoxy, preventing cracks from spreading beyond the screen and into the funnel.
Electric shock
To accelerate the electrons from the cathode to the screen with enough energy to achieve sufficient image brightness, a very high voltage (EHT or extra-high tension) is required, from a few thousand volts for a small oscilloscope CRT to tens of thousands for a larger screen color TV. This is many times greater than household power supply voltage. Even after the power supply is turned off, some associated capacitors and the CRT itself may retain a charge for some time, and may dissipate that charge suddenly through a path to ground, such as an inattentive person grounding a capacitor discharge lead. An average monochrome CRT may use 1 to 1.5 kV of anode voltage per inch.
Security concerns
Under some circumstances, the signal radiated from the electron guns, scanning circuitry, and associated wiring of a CRT can be captured remotely and used to reconstruct what is shown on the CRT using a process called Van Eck phreaking. Special TEMPEST shielding can mitigate this effect. Such radiation of a potentially exploitable signal, however, occurs also with other display technologies and with electronics in general.
Recycling
Due to the toxins contained in CRT monitors the United States Environmental Protection Agency created rules (in October 2001) stating that CRTs must be brought to special e-waste recycling facilities. In November 2002, the EPA began fining companies that disposed of CRTs through landfills or incineration. Regulatory agencies, local and statewide, monitor the disposal of CRTs and other computer equipment.
As electronic waste, CRTs are considered one of the hardest types to recycle. CRTs have relatively high concentration of lead and phosphors (not phosphorus), both of which are necessary for the display. There are several companies in the United States that charge a small fee to collect CRTs, then subsidize their labor by selling the harvested copper, wire, and printed circuit boards. The United States Environmental Protection Agency (EPA) includes discarded CRT monitors in its category of "hazardous household waste" but considers CRTs that have been set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.
Various states participate in the recycling of CRTs, each with its own reporting requirements for collectors and recycling facilities. For example, in California the recycling of CRTs is governed by CalRecycle, the California Department of Resources Recycling and Recovery, through its Payment System. Recycling facilities that accept CRT devices from the business and residential sectors must obtain contact information such as address and phone number to ensure the CRTs come from a California source in order to participate in the CRT Recycling Payment System.
In Europe, disposal of CRT televisions and monitors is covered by the WEEE Directive.
Multiple methods have been proposed for the recycling of CRT glass. The methods involve thermal, mechanical and chemical processes. All proposed methods remove the lead oxide content from the glass. Some companies operated furnaces to separate the lead from the glass. A coalition called the Recytube project was once formed by several European companies to devise a method to recycle CRTs. The phosphors used in CRTs often contain rare earth metals. A CRT contains about 7g of phosphor.
The funnel can be separated from the screen of the CRT using laser cutting, diamond saws or wires or using a resistively heated nichrome wire.
Leaded CRT glass was sold to be remelted into other CRTs, or even broken down and used in road construction, tiles, concrete and cement bricks, and fiberglass insulation, or used as flux in metals smelting.
A considerable portion of CRT glass is landfilled, where it can pollute the surrounding environment. It is more common for CRT glass to be disposed of than being recycled.
See also
Basics of cathode rays and discharge in low-pressure gas:
Cathode ray
Vacuum tube
Light production by cathode rays:
Cathodoluminescence
Crookes tube
Phosphor
Scintillation (physics)
Manipulating the electron beam:
Blanking (video)
Horizontal blanking interval
Vertical blanking interval
Deflection yoke
Electron-beam processing
Electrostatic deflection
Electrostatic lens
Magnetic deflection
Magnetic lens
Applying CRT in different display-purpose:
Analog television
Image displaying
Comparison of CRT, LCD, plasma, and OLED
Comparison of display technology
Computer monitor
CRT projector
Image dissector
Monochrome monitor
Monoscope
Oscilloscope
Cathode-ray oscilloscope
Overscan
Raster scan
Scan line
Miscellaneous phenomena:
Noise (video)
Historical aspects:
Direct-view bistable storage tube
Flat-panel display
Geer tube
History of display technology
Image dissector
LCD television, LED-backlit LCD, LED display
Penetron
Surface-conduction electron-emitter display
Trinitron
Safety and precautions:
Monitor filter
Photosensitive epilepsy
TCO Certification
References
Selected patents
: Zworykin Television System
External links
Consumer electronics
Display technology
Television technology
Vacuum tube displays
Audiovisual introductions in 1897
Telecommunications-related introductions in 1897 |
826727 | https://en.wikipedia.org/wiki/Windows%20Forms | Windows Forms | Windows Forms (WinForms) is a free and open-source graphical user interface (GUI) class library included as a part of Microsoft .NET, .NET Framework or Mono Framework, providing a platform to write client applications for desktop, laptop, and tablet PCs. While it is seen as a replacement for the earlier and more complex C++-based Microsoft Foundation Class Library, it does not offer a comparable paradigm and only acts as a platform for the user interface tier in a multi-tier solution.
At the Microsoft Connect event on December 4, 2018, Microsoft announced releasing Windows Forms as an open source project on GitHub. It is released under the MIT License. With this release, Windows Forms has become available for projects targeting the .NET Core framework. However, the framework is still available only on the Windows platform, and Mono's incomplete implementation of Windows Forms remains the only cross-platform implementation.
Architecture
A Windows Forms application is an event-driven application supported by Microsoft's .NET Framework. Unlike a batch program, it spends most of its time simply waiting for the user to do something, such as fill in a text box or click a button. The code for the application can be written in a .NET programming language such as C# or Visual Basic.
Windows Forms provides access to native Windows User Interface Common Controls by wrapping the existent Windows API in managed code. With the help of Windows Forms, the .NET Framework provides a more comprehensive abstraction above the Win32 API than Visual Basic or MFC did.
Windows Forms is similar in purpose to the Microsoft Foundation Class (MFC) library for developing client applications; MFC provides a wrapper consisting of a set of C++ classes for the development of Windows applications. However, unlike MFC, Windows Forms does not provide a default application framework. Every control in a Windows Forms application is a concrete instance of a class.
Features
All visual elements in the Windows Forms class library derive from the Control class. This provides the minimal functionality of a user interface element such as location, size, color, font, text, as well as common events like click and drag/drop. The Control class also has docking support to let a control rearrange its position under its parent. The Microsoft Active Accessibility support in the Control class also helps impaired users to use Windows Forms better.
Besides providing access to native Windows controls like button, textbox, checkbox and listview, Windows Forms added its own controls for ActiveX hosting, layout arrangement, validation and rich data binding. Those controls are rendered using GDI+.
History and future
Just like Abstract Window Toolkit (AWT), the equivalent Java API, Windows Forms was an early and easy way to provide graphical user interface components to the .NET Framework. Windows Forms is built on the existing Windows API and some controls merely wrap underlying Windows components. Some of the methods allow direct access to Win32 callbacks, which are not available in non-Windows platforms.
In .NET Framework 2.0, Windows Forms gained richer layout controls, Office 2003 style toolstrip controls, multithreading component, richer design-time and data binding support as well as ClickOnce for web-based deployment.
With the release of .NET 3.0, Microsoft released a second, parallel API for rendering GUIs: Windows Presentation Foundation (WPF) based on DirectX, together with a GUI declarative language called XAML.
During a question-and-answer session at the Build 2014 Conference, Microsoft explained that Windows Forms was under maintenance mode, with no new features being added, but bugs found would still be fixed. Most recently, improved high-DPI support for various Windows Forms controls was introduced in updates to .NET Framework version 4.5.
XAML backwards compatibility with Windows Forms
For future development, Microsoft has succeeded Windows Forms with an XAML-based GUI entry using frameworks such as WPF and UWP. However, drag and drop placement of GUI components in a manner similar to Windows Forms is still provided in XAML by replacing the root XAML element of the Page/Window with a "Canvas" UI-Control. When making this change, the user can build a window in a similar fashion as in Windows Forms by directly dragging and dropping components using the Visual Studio GUI.
While XAML provides drag and drop placement backwards compatibility through the Canvas Control, XAML Controls are only similar to Windows Forms Controls and are not one-to-one backwards compatible. They perform similar functions and have a similar appearance, but the properties and methods are different enough to require remapping from one API to another.
Alternative implementation
Mono is a project led by Xamarin (formerly by Ximian, then Novell) to create an Ecma standard-compliant, .NET Framework-compatible set of tools.
In 2011, Mono's support for System.Windows.Forms as of .NET 2.0 was announced as complete; System.Windows.Forms 2.0 works natively on Mac OS X. However, System.Windows.Forms has not been actively developed on Mono. Full compatibility with .NET was not possible, because Microsoft's System.Windows.Forms is mainly a wrapper around the Windows API, and some of the methods allow direct access to Win32 callbacks, which are not available on platforms other than Windows. A more significant problem is that, since version 5.2, Mono has been upgraded so that its default is to assume a 64-bit platform. However, System.Windows.Forms on Mono for the Mac OS X platform has been built using a 32-bit subsystem, Carbon. As of this date, a 64-bit version of System.Windows.Forms for use on Mac OS X remains unavailable, and only .NET applications built for the 32-bit platform can be expected to execute.
See also
Microsoft Visual Studio
ClickOnce
Abstract Window Toolkit (AWT), the equivalent GUI application programming interface (API) for the Java programming language
Visual Component Library (VCL) from Borland
Visual Test, test automation
References
External links
MSDN: Building Windows Forms applications
MSDN : Windows.Forms reference documentation
MSDN : Windows Forms Technical Articles - Automating Windows Form with Visual Test
.NET terminology
Formerly proprietary software
Free and open-source software
Forms
Microsoft free software
Mono (software)
Software using the MIT license
Widget toolkits
2002 software |
243402 | https://en.wikipedia.org/wiki/Revolution%20%28software%20platform%29 | Revolution (software platform) | Revolution is a software development environment/multimedia authoring software in the tradition of HyperCard and is based on the MetaCard engine. Its primary focus is on providing a relatively accessible development tool set and scripting language that enable the creation of software programs that run across multiple platforms with little or no code modifications. The Integrated Development Environment (IDE) included with Revolution is built partly on the models created by Bill Atkinson and the original HyperCard team at Apple and subsequently followed by many other software development products, such as Microsoft's Visual Basic. Revolution includes an English language-like scripting language called Transcript, a full programming language superset of the HyperCard's scripting language, HyperTalk.
The higher-grade versions (see Versions, below), allow applications to be compiled to run on more than one platform, including Macintosh (Classic or Mac OS 9, and Mac OS X), Windows and Unix-like systems including Linux. It can also import HyperCard stacks, which require little or no modification unless they use external functions, which generally do not work in Revolution.
Revolution is designed to be an environment where non-programmers feel at ease and programmers feel not too uncomfortable (after getting used to "non-traditional" programming syntax). Like any programming language or development environment, opinions as to the degree to which those aims have been achieved vary greatly.
Versions
Before Revolution 2, the "Starter Kit" version was available. This was freeware and imposed restrictions on the user, such as not allowing scripts longer than ten lines to be compiled. However, this has since been discontinued and is no longer available for download. The "Dreamcard" version is intended for home users/hobbyists. Applications (called "stacks") built using it require either the "Dreamcard Player" or a full copy of Revolution to run because Dreamcard does not include the Revolution compiler. The "Studio" version is more powerful, and is useful in creating professional binary applications. The Enterprise version is probably too expensive for casual users, but when compared to other similar products such as Borland Delphi or Kylix, is priced competitively. If one wishes to develop programs on non-Microsoft platforms for cross-platform deployment, Revolution is one of a small handful of commercially supported options.
Compatibility
Revolution is derived from MetaCard's engine, so MetaCard stacks are 100% compatible with Revolution. However, the other way around is not necessarily true. HyperCard stacks can be run, but externals will only run on Macs. SuperCard stacks must be run through a converter to be upgraded to Revolution/MetaCard format.
Platforms
Revolution runs on Mac Classic, Mac OS X, Windows 9x/NT/2000/XP, and the following UNIX variants:
FreeBSD or BSD/OS
HP-UX 10.20 or later
SGI IRIX 5.3 or later
Linux Intel 1.2.13 ELF or later
AIX 3.2.3 or later
Solaris (2.5 or later for SPARC and x86; 2.3 and 2.4 SPARC only)
SunOS 4.1.x or later
Standalone applications written in Revolution can run on the above, as well as Windows 3.1 (with limitations).
As of March 2005, the Dreamcard Player runs only on Windows, Mac OS (Classic or X), and Linux.
Interface
On Linux, the user's GNOME/Xfce/GTK+ theme will be used if GTK+ is installed, otherwise, a Motif look will be used. On the Mac, Appearance Manager will be used if available, otherwise the Platinum look will be used. On Windows, the XP theme or standard widgets will be used. Users can preview the Motif, Platinum, and Windows appearance on any platform.
Revolution community
There are many companies and groups which use the Revolution engine. It is mainly used by freelance programmers to make small widgets or libraries, but, as one example, it is used exclusively by the Christa McAuliffe Space Education Center.
External links
Web site
Dynamic programming languages
Dynamically typed programming languages
Scripting languages |
1214512 | https://en.wikipedia.org/wiki/Basic%20access%20authentication | Basic access authentication | In the context of an HTTP transaction, basic access authentication is a method for an HTTP user agent (e.g. a web browser) to provide a user name and password when making a request. In basic HTTP authentication, a request contains a header field in the form of Authorization: Basic <credentials>, where credentials is the Base64 encoding of ID and password joined by a single colon :.
It was originally implemented by Ari Luotonen at CERN in 1993 and defined in the HTTP 1.0 specification in 1996. It is specified in from 2015, which obsoletes from 1999.
Features
HTTP Basic authentication (BA) implementation is the simplest technique for enforcing access controls to web resources because it does not require cookies, session identifiers, or login pages; rather, HTTP Basic authentication uses standard fields in the HTTP header.
Security
The BA mechanism does not provide confidentiality protection for the transmitted credentials. They are merely encoded with Base64 in transit and not encrypted or hashed in any way. Therefore, basic authentication is typically used in conjunction with HTTPS to provide confidentiality.
Because the BA field has to be sent in the header of each HTTP request, the web browser needs to cache credentials for a reasonable period of time to avoid constantly prompting the user for their username and password. Caching policy differs between browsers.
HTTP does not provide a method for a web server to instruct the client to "log out" the user. However, there are a number of methods to clear cached credentials in certain web browsers. One of them is redirecting the user to a URL on the same domain, using credentials that are intentionally incorrect. However, this behavior is inconsistent between various browsers and browser versions. Microsoft Internet Explorer offers a dedicated JavaScript method to clear cached credentials:
<script>document.execCommand('ClearAuthenticationCache');</script>
In modern browsers, cached credentials for basic authentication are typically cleared when clearing browsing history. Most browsers allow users to specifically clear only credentials, though the option may be hard to find, and typically clears credentials for all visited sites.
Protocol
Server side
When the server wants the user agent to authenticate itself towards the server after receiving an unauthenticated request, it must send a response with a HTTP 401 Unauthorized status line and a WWW-Authenticate header field.
The WWW-Authenticate header field for basic authentication is constructed as following:
WWW-Authenticate: Basic realm="User Visible Realm"
The server may choose to include the charset parameter from :
WWW-Authenticate: Basic realm="User Visible Realm", charset="UTF-8"
This parameter indicates that the server expects the client to use UTF-8 for encoding username and password (see below).
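As a rough illustration of the server side, the following Python sketch (standard library only; the realm string, port and handler name are illustrative rather than taken from any particular implementation) issues the challenge for unauthenticated requests and decodes the credentials from authenticated ones. A real deployment would also verify the password against a credential store and run over HTTPS.

import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

REALM = "User Visible Realm"  # illustrative realm name

class BasicAuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization", "")
        if not auth.startswith("Basic "):
            # No (or non-Basic) credentials: challenge the client.
            self.send_response(401)
            self.send_header("WWW-Authenticate",
                             'Basic realm="%s", charset="UTF-8"' % REALM)
            self.end_headers()
            return
        # Decode the Base64 payload back into "username:password".
        decoded = base64.b64decode(auth[len("Basic "):]).decode("utf-8")
        username, _, password = decoded.partition(":")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(("Hello, " + username).encode("utf-8"))

# To try it locally on a hypothetical port:
# HTTPServer(("127.0.0.1", 8080), BasicAuthHandler).serve_forever()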
Client side
When the user agent wants to send authentication credentials to the server, it may use the Authorization header field.
The Authorization header field is constructed as follows:
The username and password are combined with a single colon (:). This means that the username itself cannot contain a colon.
The resulting string is encoded into an octet sequence. The character set to use for this encoding is by default unspecified, as long as it is compatible with US-ASCII, but the server may suggest use of UTF-8 by sending the charset parameter.
The resulting string is encoded using a variant of Base64 (+/ and with padding).
The authorization method and a space (e.g. "Basic ") is then prepended to the encoded string.
For example, if the browser uses Aladdin as the username and open sesame as the password, then the field's value is the Base64 encoding of Aladdin:open sesame, or QWxhZGRpbjpvcGVuIHNlc2FtZQ==. Then the Authorization header field will appear as:
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
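The construction above can be reproduced in a few lines of Python, shown here purely as an illustration (the helper name is arbitrary; the standard library's base64 module performs the encoding):

import base64

def basic_auth_header(username, password, charset="utf-8"):
    # The username must not contain a colon, because the colon separates
    # the username from the password in the encoded credentials.
    if ":" in username:
        raise ValueError("username must not contain a colon")
    credentials = (username + ":" + password).encode(charset)
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("Aladdin", "open sesame"))
# prints: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==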
See also
Digest access authentication
HTTP+HTML form-based authentication
HTTP header
TLS-SRP, an alternative if one wants to avoid transmitting a password-equivalent to the server (even encrypted, like with TLS).
References and notes
External links
Hypertext Transfer Protocol
Computer access control protocols |
22627698 | https://en.wikipedia.org/wiki/Efecte | Efecte | Efecte is a Finnish software corporation that produces cloud-based solutions and related services to its customers. Its product range consists of IT departments’ ERP solutions, i.e. IT Service Management solutions, and Identity and Access Management software that is needed for signing into different IT systems.
The company was founded in 1998 and was listed on the Nasdaq First North Helsinki marketplace in 2017.
History
1998–2012
Jaan Apajalahti, Kristian Jaakkola, and Jussi Sarkkinen founded the company in 1998 by the name Bitmount Systems Oy.
In 2002, the name of the company was changed to Efecte Oy. The company’s first international subsidiary was founded in 2005.
In 2006, Efecte’s revenue growth was more than 60%, and it employed 70 people in Nordic countries.
In 2007, Efecte’s revenue was 6.3 million euros, and growth year-over-year was more than 60%. The rapid growth compared to the industry overall was due to the growth of market share in Finland and successful business in other Nordic countries. New offices were opened in Denmark and Norway. In ten years, Efecte had become a leading company in its field in the Nordics.
In 2008, Efecte employed 100 people in the Nordics, 70 in Finland and the rest in Sweden, Norway, and Denmark. It sought for new growth in the German market. About half of the company’s customer base was in the public sector and about half in the private sector. E.g., 11 out of the 15 biggest cities and municipalities in Finland were Efecte’s customers. Corporate revenue was almost 8 million euros.
The years after the financial crisis were difficult for the company. In 2009, the number of employees was radically reduced, and Efecte’s founders left the company’s operative leadership positions. New professional leadership was appointed, whose focus was on the software business. Following the new channel partner strategy, Efecte gave up its own sales organization and consulting business.
2013–
When the company's finances had been fixed by 2013, Efecte renewed its strategy. The decision was made to start producing software as a service, meaning that using the solutions did not require customers to install anything or run their own servers. Cloud-based solutions had already been tested since 2009. Sakari Suhonen, previously deputy CEO of the company, was appointed the next CEO. Moving from license sales to recurring monthly billing was a big transformation in the business model, in sales, and in the company’s culture.
The company focused its efforts on product development and started to build a cloud platform that would not be dependent on multinational cloud platform vendors like Microsoft or Amazon. Early 2013, Efecte acquired RM5 Software, which had concentrated on identity and access management software. Efecte re-started its own sales organization and gave up partner channel sales. The company also resumed its consulting business.
In 2015, Efecte launched a new solution platform, Efecte Edge. The company had approximately 200 customers. The company also migrated its old customers towards using its cloud solutions.
In 2016, the company’s revenue reached 8.3 million euros.
By 2017, Efecte’s number of customers had reached approximately 300, and it had 80 employees. The number of employees in Germany tripled when Efecte hired a six-person team from its American competitor Cherwell. Efecte justified the unconventional growth strategy by noting that the team knew the market well and was better prepared than hires from outside the industry. In December, the company was listed on the Nasdaq First North Helsinki marketplace. The company’s turnover passed 10 million euros for the first time.
In July 2018, Niilo Fredrikson was appointed CEO, succeeding Sakari Suhonen, who had held the position for five years. During Suhonen’s tenure, the company renewed its product portfolio, moved to cloud solutions, and became a listed entity. During the previous year, the company had also hired dozens of new employees. Fredrikson started in his CEO role in September. International business accounted for 20% of the company’s turnover.
In spring 2019, the SaaS business accounted for 50% of the company’s net revenue.
Organization
Efecte corporation’s parent company is Efecte Plc. Its headquarters is located in Espoo, Finland, and it employs over 100 people. The company serves its Scandinavian customers from Stockholm, Sweden and German, Swiss, and Austrian customers from Munich, Germany. In Germany, Efecte collaborates with Bechtle GmbH IT System House Hamburg.
The company's CEO is Niilo Fredrikson.
Products
Efecte offers cloud-based IT Service Management and Identity and Access Management software and professional services that support them. With Efecte’s platform, customers can digitalize, manage and automate different services. Customers use the platform to manage, e.g., IT, HR and financial services, customer service, and access rights. The platform is also in use in facilities, contract management, and identity management.
Efecte’s customer base consists of different service organizations in large and mid-sized companies and public sector entities. In Finland, e.g., Mehiläinen, Musti Group, and Sarastia were Efecte’s customers in 2019. In Switzerland, one of Efecte’s largest customers is the hotel chain Hotelplan Group that has over 1600 offices.
In 2019, 80% of the company’s revenue came from service management, and the remaining 20% from Identity and Access Management. Service management includes planning, delivering, managing, and developing organizations’ IT services. Identity and Access Management means the creation, management, and storing users' identities and access rights. Efecte’s ticketing systems are used in IT functions, but increasingly also in finance departments. Efecte’s platform has an integrated kanban board, on which tasks move as cards from left (To Do) to the right (Done), moving through different process phases. Efecte offers services that are related to its software products, e.g., implementation projects, integration work, training, and continuous development of customers’ environments. In 2015 Nordic customers included SSAB, Roskilde Municipality, Danske Bank, and Stena Sphere. In 2015 the company said other customers were companies such as Konecranes, Patria, DNA Oy, VR Group, Paulig, and Finnish cities Helsinki, Vantaa, Espoo, and Tampere.
Recognitions
In the Technology Fast 50 ranking published by Deloitte & Touche, Efecte ranked among the fastest growing Finnish technology companies in 2005, 2006, 2007 and 2008, making the top 10 list in 2005. In 2005, Efecte was also ranked as the 216th fastest growing technology company across EMEA.
In 2008, Efecte ranked 9th in the Best Workplaces in Finland study conducted by the Great Place to Work Institute. Efecte was also the only so-called gazelle company in an annual mapping of the Finnish software industry. According to the definition, a gazelle company grows at least 50% in three consecutive years.
In 2020, Efecte was ranked Finland’s 10th best workplace in Great Place to Work Finland’s mid-sized company category.
References
Software companies of Finland |
2830383 | https://en.wikipedia.org/wiki/Philip%20Wadler | Philip Wadler | Philip Lee Wadler (born April 8, 1956) is an American computer scientist known for his contributions to programming language design and type theory. In particular, he has contributed to the theory behind functional programming and the use of monads in functional programming, the design of the purely functional language Haskell, and the XQuery declarative query language. In 1984, he created the Orwell programming language. Wadler was involved in adding generic types to Java 5.0. He is also author of the paper Theorems for free! that gave rise to much research on functional language optimization (see also Parametricity).
Education
Wadler received a Bachelor of Science degree in mathematics from Stanford University in 1977, and a Master of Science degree in Computer Science from Carnegie Mellon University in 1979. He completed his Doctor of Philosophy in Computer Science at Carnegie Mellon University in 1984. His thesis was entitled Listlessness is Better than Laziness and was supervised by Nico Habermann.
Research and career
Wadler's research interests are in programming languages.
Wadler was a research fellow at the Programming Research Group (part of the Oxford University Computing Laboratory) and St Cross College, Oxford during 1983–87. He was progressively lecturer, reader, and professor at the University of Glasgow from 1987 to 1996. Wadler was a member of technical staff at Bell Labs, Lucent Technologies (1996–99) and then at Avaya Labs (1999–2003). Since 2003, he has been professor of theoretical computer science in the School of Informatics at the University of Edinburgh.
Wadler was editor of the Journal of Functional Programming from 1990 to 2004. Wadler is currently working on a new functional language designed for writing web applications, called Links. He has supervised numerous doctoral students to completion.
Since 2003, Wadler has been a professor of theoretical computer science at the Laboratory for Foundations of Computer Science at the University of Edinburgh and is the chair of Theoretical Computer Science. He is also a member of the university's Blockchain Technology Laboratory. He has an h-index of 72 with 26,874 citations on Google Scholar. As of December 2018, Wadler was area leader for programming languages at IOHK, the blockchain engineering company developing Cardano.
Awards and honours
Wadler received the Most Influential POPL Paper Award in 2003 for the 1993 POPL Symposium paper Imperative Functional Programming, jointly with Simon Peyton Jones.
In 2005, he was elected Fellow of the Royal Society of Edinburgh. In 2007, he was inducted as an ACM Fellow by the Association for Computing Machinery (ACM).
References
External links
1956 births
Living people
Stanford University alumni
Carnegie Mellon University alumni
American computer scientists
British computer scientists
Members of the Department of Computer Science, University of Oxford
Fellows of St Cross College, Oxford
Academics of the University of Glasgow
Scientists at Bell Labs
Academics of the University of Edinburgh
Functional programming
Programming language researchers
Formal methods people
Academic journal editors
Computer science writers
American textbook writers
American male non-fiction writers
Fellows of the Royal Society of Edinburgh
Fellows of the Association for Computing Machinery
American expatriates in the United Kingdom
People associated with Cardano |
2286382 | https://en.wikipedia.org/wiki/Interrupt%20storm | Interrupt storm | In operating systems, an interrupt storm is an event during which a processor receives an inordinate number of interrupts that consume the majority of the processor's time. Interrupt storms are typically caused by hardware devices that do not support interrupt rate limiting.
Background
Because interrupt processing is typically a non-preemptible task in time-sharing operating systems, an interrupt storm will cause sluggish response to user input, or even appear to freeze the system completely. This state is commonly known as live lock. In such a state, the system is spending most of its resources processing interrupts instead of completing other work. To the end-user, it does not appear to be processing anything at all as there is often no output. An interrupt storm is sometimes mistaken for thrashing, since they both have similar symptoms (unresponsive or sluggish response to user input, little or no output).
Common causes include: misconfigured or faulty hardware, faulty device drivers, flaws in the operating system, or metastability in one or more components. The latter condition rarely occurs outside of prototype or amateur-built hardware.
Most modern hardware and operating systems have methods for mitigating the effect of an interrupt storm. For example, most Ethernet controllers implement interrupt "rate limiting", which causes the controller to wait a programmable amount of time between each interrupt it generates. When not present within the device, similar functionality is usually written into the device driver, and/or the operating system itself.
The most common cause is when a device "behind" another signals an interrupt to an APIC (Advanced Programmable Interrupt Controller). Most computer peripherals generate interrupts through an APIC as the number of interrupts is almost always less (typically 15 for the modern PC) than the number of devices. The OS must then query each driver registered to that interrupt to ask if the interrupt originated from its hardware. Faulty drivers may always claim "yes", causing the OS to not query other drivers registered to that interrupt (only one interrupt can be processed at a time). The device which originally requested the interrupt therefore does not get its interrupt serviced, so a new interrupt is generated (or is not cleared) and the processor becomes swamped with continuous interrupt signals. Any operating system can live lock under an interrupt storm caused by such a fault. A kernel debugger can usually break the storm by unloading the faulty driver, allowing the driver "underneath" the faulty one to clear the interrupt, if user input is still possible.
As drivers are most often implemented by a third party, most operating systems also have a polling mode that queries for pending interrupts at fixed intervals or in a round-robin fashion. This mode can be set globally, on a per-driver or per-interrupt basis, or dynamically if the OS detects a fault condition or excessive interrupt generation. A polling mode may be enabled dynamically when the number of interrupts, or the resource use caused by an interrupt, passes certain thresholds. When these thresholds are no longer exceeded, an OS may then change the interrupting driver, interrupt, or interrupt handling globally, from polling mode back to interrupt mode. Interrupt rate limiting in hardware usually removes the need for a polling mode, but polling may still be needed during normal operation in periods of intense I/O if the processor is unable to switch contexts quickly enough to keep pace.
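The threshold-based switch between interrupt and polling mode can be pictured with a simplified sketch such as the following; Python is used here only as pseudocode, and the thresholds, class and method names are illustrative rather than taken from any particular operating system.

import time

STORM_RATE = 10_000   # interrupts per second above which polling is enabled (illustrative)
RESUME_RATE = 1_000   # rate below which interrupt-driven operation resumes (illustrative)

class Device:
    """Minimal stand-in for a driver's per-device interrupt state."""
    def __init__(self):
        self.polling = False
        self.timestamps = []

    def rate(self):
        # Count interrupts seen within the last second.
        now = time.monotonic()
        self.timestamps = [t for t in self.timestamps if now - t < 1.0]
        return len(self.timestamps)

    def on_interrupt(self):
        self.timestamps.append(time.monotonic())
        if not self.polling and self.rate() > STORM_RATE:
            self.polling = True    # mask the interrupt and schedule the poll loop

    def poll(self):
        # Called periodically (e.g. from a timer) while in polling mode; after
        # servicing a bounded batch of events, check whether the storm has subsided.
        if self.polling and self.rate() < RESUME_RATE:
            self.polling = False   # unmask the interrupt again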
History
Perhaps the first interrupt storm occurred during Apollo 11's lunar descent in 1969.
Considerations
Interrupt rate limiting must be carefully configured for optimum results. For example, an Ethernet controller with interrupt rate limiting will buffer the packets it receives from the network in between each interrupt. If the rate is set too low, the controller's buffer will overflow, and packets will be dropped. The rate must take into account how fast the buffer may fill between interrupts, and the interrupt latency between the interrupt and the transfer of the buffer to the system.
Interrupt mitigation
There are hardware-based and software-based approaches to the problem. For example, FreeBSD detects interrupt storms and masks problematic interrupts for some time in response.
The system used by NAPI is an example of the software-based approach: the system (driver) starts in an interrupt-enabled state; the interrupt handler then disables the interrupt and lets a thread or task handle the event(s), and the task then polls the device, processing some number of events before re-enabling the interrupt.
Another interesting approach using hardware support is one where the device generates interrupt when the event queue state changes from "empty" to "not empty". Then, if there are no free DMA descriptors at the RX FIFO tail, the device drops the event. The event is then added to the tail and the FIFO entry is marked as occupied. If at that point entry (tail−1) is free (cleared), an interrupt will be generated (level interrupt) and the tail pointer will be incremented. If the hardware requires the interrupt be acknowledged, the CPU (interrupt handler) will do that, handle the valid DMA descriptors at the head, and return from the interrupt.
See also
Broadcast radiation
Inter-processor interrupt (IPI)
Non-maskable interrupt (NMI)
Programmable Interrupt Controller (PIC)
References
Interrupts
Software anomalies |
526711 | https://en.wikipedia.org/wiki/Lockheed%20HC-130 | Lockheed HC-130 | The Lockheed HC-130 is an extended-range, search and rescue (SAR)/combat search and rescue (CSAR) version of the C-130 Hercules military transport aircraft, with two different versions operated by two separate services in the U.S. armed forces.
The HC-130H Hercules and HC-130J Super Hercules versions are operated by the United States Coast Guard in a SAR and maritime reconnaissance role.
The HC-130P Combat King and HC-130J Combat King II variants are operated by the United States Air Force for long-range SAR and CSAR. The USAF variants also execute on scene CSAR command and control, airdrop pararescue forces and equipment, and are also capable of providing aerial refueling to appropriately equipped USAF, US Army, USN, USMC, and NATO/Allied helicopters in flight. In this latter role, they are primarily used to extend the range and endurance of combat search and rescue helicopters.
In July 2015, it was announced that the U.S. Forest Service will be receiving some of the U.S. Coast Guard's HC-130H aircraft to use as aerial fire retardant drop tankers as the Coast Guard replaces the HC-130H with additional HC-130J and HC-27J Spartan aircraft, the latter being received from the Air National Guard as part of a USAF-directed divestment of the C-27.
Development
The United States Coast Guard was the first recipient of the HC-130 variant. In keeping with the USN/USMC/USCG designation system of the time, the designation for the first order in 1958 was R8V-1G, but with the introduction of the Tri-Service aircraft designation system for commonality with the US Army and USAF in 1962, this was eventually changed to HC-130B. Six USCG HC-130E aircraft were produced in 1964, but production soon switched to the new C-130H platform which was entering service. The first HC-130H flew on 8 December 1964 and the USCG still operates this aircraft.
First flown in 1964, the USAF HC-130P Combat King aircraft has served many roles and missions. Based on the USAF C-130E airframe, it was modified to conduct search and rescue missions, provide a command and control platform, conduct in-flight refueling of helicopters, and carry supplemental fuel in additional internal cargo bay fuel tanks for extending range or air refueling. They were also originally modified to employ the Fulton surface-to-air recovery system, although this system has since been discontinued and the specialized equipment removed. The HC-130N was a follow-up order without the Fulton recovery system and all USAF extant HC-130Ps have since had their Fulton recovery systems removed.
Role
USAF HC-130P/N Combat King
The USAF HC-130P/N, also known as the Combat King aircraft, can fly in the day against a reduced threat; however, crews normally fly night, low-level, air refueling and airdrop operations using night vision goggles (NVG). The aircraft can routinely fly low-level NVG tactical flight profiles to avoid detection. To enhance the probability of mission success and survivability near populated areas, USAF HC-130 crews employ tactics that include incorporating no external lighting or communications and avoiding radar and weapons detection.
Secondary mission capabilities include performing tactical airdrops of pararescue specialist teams, small bundles, zodiac watercraft, or four-wheel drive all-terrain vehicles; and providing direct assistance to a survivor in advance of the arrival of a recovery vehicle. Other capabilities are extended visual and electronic searches over land or water, tactical airborne radar approaches and unimproved airfield operations. A team of three Pararescuemen (PJ's), trained in emergency trauma medicine, harsh environment survival and assisted evasion techniques, is part of the basic mission crew complement.
Up until 2016, HC-130P/N aircraft of the Combat Air Forces (CAF) were a combination of mid to late-1960s vintage aircraft based on C-130E airframes and mid-1990s vintage aircraft based on C-130H3 airframes. All underwent extensive modifications. These modifications included night vision-compatible interior and exterior lighting, a personnel locator system compatible with aircrew survival radios, improved digital low-power color radar and forward-looking infrared systems. As of 2018, with the exception of a handful of extant aircraft in the Air National Guard, all remaining HC-130P/N aircraft are operated by the Air Force Reserve Command.
USCG HC-130H
The HC-130H first flew on 8 December 1964. The Coast Guard began equipping with the HC-130H in the late sixties and early seventies.
U.S. Coast Guard HC-130Hs were primarily acquired for long-range overwater search missions, support airlift, maritime patrol, North Atlantic Ice Patrol and command and control of search and rescue, replacing previously operated HU-16 Albatross amphibious and HC-123 Provider land-based aircraft. Like their USAF counterparts, USCG HC-130s also have the capability of air dropping rescue equipment to survivors at sea or over open terrain. They carried additional equipment and two 1,800-gallon fuel bladders in the cargo compartment.
USAF HC-130P Combat Shadow
The MC-130P Combat Shadow series of aircraft initially entered service in December 1965 during the Vietnam War as the HC-130H CROWN airborne controller. The CROWN airborne controllers located downed aircrew and directed Combat Search and Rescue operations over North Vietnam. In mid-1966 flight testing began of rescue helicopters equipped with aerial refueling receivers, and 11 of the controller aircraft were modified as tankers and redesignated the HC-130P SAR Command and Control/vertical lift (helicopter) aerial refueling aircraft, entering service in Southeast Asia in November 1966. Originally assigned to the Tactical Air Command (TAC) and then the Military Airlift Command (MAC), Combat Shadows have been part of the Air Force Special Operations Command (AFSOC) since that command's establishment in 1993. In February 1996, AFSOC's 28-aircraft HC-130P tanker fleet was redesignated the MC-130P Combat Shadow, aligning the variant with AFSOC's other M-series special operations mission aircraft. At the same time as this redesignation, USAF continued to field HC-130P/N aircraft as dedicated CSAR platforms under the Air Combat Command (ACC) and in ACC or PACAF-gained CSAR units in the Air Force Reserve and Air National Guard.
USCG HC-130J
The new HC-130J aircraft are derived from the Lockheed Martin KC-130J tanker operated by the U.S. Marine Corps. The USCG has six HC-130Js in service, but they are not capable of refueling helicopters in flight. The first delivery of this variant to the United States Coast Guard was in October 2003. They initially operated in a logistic support role until they received significant modifications, including installations of a large window on each side of the fuselage to allow crew members to visually scan the sea surface, the addition of an inverse synthetic aperture sea search radar, flare tubes, a forward-looking infrared/electro-optical sensor, a gaseous oxygen system for the crew and an enhanced communications suite. Aircraft are installed with the Minotaur Mission System and incorporates sensors; radar; and command, control, communications, computers, intelligence, surveillance and reconnaissance equipment and enables aircrews to gather and process surveillance information that can be transmitted to other platforms and units during flight.
The first of these modified Coast Guard HC-130Js was delivered in March 2008, and deliveries were completed in September 2019. The 17th HC-130J for the United States Coast Guard is expected to be delivered in 2024.
The Coast Guard is acquiring a fleet of 22 new, fully missionized HC-130J aircraft to replace its legacy HC-130Hs.
USAF HC-130J Combat King II
The USAF HC-130J Combat King II combat rescue variant has modifications for in-flight refueling of helicopters and tilt-rotor aircraft, including refueling pods on underwing pylons and additional internal fuel tanks in the cargo bay. The HC-130J Combat King II is also capable of itself being refueled in flight by boom-equipped tankers such as the KC-135, KC-10 and KC-46.
Lockheed Martin officials conducted the first flight of the USAF HC-130J version on 29 July 2010. The first HC-130J was delivered to the USAF in September 2010, but underwent further testing before achieving Initial Operational Capability (IOC) in 2012.
The HC-130J personnel recovery aircraft completed developmental testing on 14 March 2011. The final test point was air-to-air refueling, and was the first ever boom refueling of a C-130 where the aircraft's refueling receiver was installed during aircraft production. This test procedure also applied to the MC-130J Combat Shadow II aircraft in production for Air Force Special Operations Command.
Given the advancing age of its current HC-130P/N airframes, all of which are based on either the venerable (and since retired) mid/late-1960s vintage C-130E airframe or the more recent mid-1990s vintage C-130H2/H3 airframe, the Air Force plans to eventually buy up to 78 HC-130J Combat King IIs to equip rescue squadrons in the active Air Force, the Air Force Reserve Command and the Air National Guard. The first HC-130J was delivered to the 563d Rescue Group at Davis-Monthan Air Force Base, Arizona on 15 November 2012.
The US Air Force Reserve received its first HC-130J on 2 April 2020 when it was delivered to the 920th Rescue Wing's 39th Rescue Squadron at Patrick Air Force Base in Florida.
Operational history
U.S. Coast Guard operations
The United States Coast Guard operates 18 HC-130H aircraft from three bases around the United States:
CGAS Clearwater, Florida
CGAS Kodiak, Alaska
CGAS Barbers Point (formerly NAS Barbers Point), Hawaii
These aircraft are used for search and rescue, enforcement of laws and treaties, illegal drug interdiction, marine environmental protection, military readiness, International Ice Patrol missions, as well as cargo and personnel transport.
The Coast Guard also currently operates an additional 9 HC-130J aircraft from CGAS Elizabeth City, North Carolina.
Neither the HC-130H nor the HC-130J in their U.S. Coast Guard variants are equipped for the aerial refueling of helicopters.
U.S. Air Force operations
The HC-130P (to include HC-130P/N) is primarily based on the C-130E airlift aircraft, with a smaller number based on the C-130H. The USAF HC-130J is a newly manufactured aircraft. As the dedicated fixed-wing combat search and rescue platform in the USAF inventory, the HC-130 is operated by the following units:
Air Combat Command
347th Rescue Group (347 RQG), 71st Rescue Squadron (71 RQS), Moody AFB, Georgia – HC-130J
563d Rescue Group (563 RQG), 79th Rescue Squadron (79 RQS), Davis-Monthan AFB, Arizona – HC-130J
Air Education and Training Command
58th Special Operations Wing (58 SOW), Kirtland AFB, New Mexico
415th Special Operations Squadron (415 SOS) – HC-130J
Air Force Reserve Command
920th Rescue Wing (920 RQW), 39th Rescue Squadron (39 RQS), Patrick Space Force Base, Florida – HC-130P/N (transitions to HC-130J FY20/21)
Air National Guard
106th Rescue Wing (106 RQW), 102d Rescue Squadron (102 RQS), New York Air National Guard, Francis S. Gabreski Air National Guard Base, New York – HC-130J
129th Rescue Wing (129 RQW), 130th Rescue Squadron (130 RQS), California Air National Guard, Moffett Federal Airfield, California – HC-130J
176th Wing (176 WG), 211th Rescue Squadron (211 RQS), Alaska Air National Guard, Joint Base Elmendorf-Richardson, Alaska – HC-130J
HC-130s were assigned to the Air Combat Command (ACC) from 1992 to 2003, to include those Air Force Reserve Command and Air National Guard rescue units operationally-gained by ACC. Prior to 1992, they were assigned to the Air Rescue Service as part of Military Airlift Command (MAC). In October 2003, operational responsibility for the Continental United States (CONUS) and Alaskan air search and rescue (SAR) mission, as well as the worldwide combat search and rescue (CSAR) mission was transferred to the Air Force Special Operations Command (AFSOC) at Hurlburt Field, Florida.
In October 2006, all USAF CSAR forces were reassigned back to Air Combat Command with the exception of those Alaska Air National Guard CSAR assets which were transferred to the operational claimancy of Pacific Air Forces (PACAF). The CONUS and Alaska SAR missions were also transferred back to ACC and PACAF, respectively. However, the Air Force Rescue Coordination Center (AFRCC) that had been previously located at McClellan Air Force Base, California and Scott Air Force Base, Illinois under MAC and at Langley Air Force Base, Virginia under ACC, was relocated to Tyndall Air Force Base, Florida under the control of 1st Air Force (1 AF), the USAF component command to U.S. Northern Command (USNORTHCOM) and ACC's numbered air force for the Air National Guard.
While under AFSOC and since returning to ACC and PACAF, USAF, AFRC and ANG HC-130s have been deployed to Italy, Kyrgyzstan, Kuwait, Pakistan, Saudi Arabia, Turkey, Uzbekistan, Djibouti, Iraq, Afghanistan, and Greece in support of Operations Southern and Northern Watch, Operation Allied Force, Operation Enduring Freedom, Operation Iraqi Freedom, and Operation Unified Protector. HC-130s also support continuous alert commitments in Alaska, and provided rescue coverage for NASA Space Shuttle operations in Florida until that program's termination in 2011.
The USAF's first HC-130Js gained initial operating capability (IOC) in April 2013, permitting retirement of the first group of HC-130P aircraft based on C-130E airframes that were built in the mid and late 1960s. The first HC-130J was delivered by Lockheed Martin to Air Combat Command on 23 September 2010 for testing.
In 2009, the Air National Guard operated a number of HC-130P aircraft, and the Air Force Reserve Command operated 10. As of 2019, unofficial estimates placed the number of HC-130Ps remaining at 6 airframes, all assigned to Air Force Reserve Command.
World's longest turboprop aircraft distance record
On 20 February 1972, Lieutenant Colonel Edgar Allison, USAF, and his flight crew set a recognized turboprop aircraft class record for great circle distance without landing. The USAF Lockheed HC-130H was flown from Ching Chuan Kang Air Base, Republic of China (Taiwan), to Scott AFB, Illinois in the United States. As of 2018, this record still stands, more than 45 years later.
Variants
HC-130B
Rescue version of the C-130B for United States Coast Guard (USCG) introduced in 1959, formerly R8V-1G and SC-130B.
HC-130E
Modified rescue version of the C-130E for USCG, six were produced in 1964.
HC-130H
Combat rescue version of the C-130E and C-130H for the United States Air Force (USAF) and enhanced SAR version for the USCG, with Fulton surface-to-air recovery system installed in USAF versions; many USAF versions later updated to HC-130P standard.
HC-130P Combat King
Extended range version of the HC-130H, modified for in-flight refueling of helicopters, refueling pods on underwing pylons, and additional internal fuel tanks in the cargo bay. Initial examples in series based on C-130E airframe until late 1960s. Later examples built in the 1980s and 1990s based on C-130H airframe.
HC-130P/N Combat King
Additional order of new HC-130Ps without Fulton surface-to-air recovery system or existing HC-130Ps with Fulton system removed.
HC-130J
Modified rescue version of the C-130J for USCG.
HC-130J Combat King II
USAF combat rescue variant of the C-130J with changes for in-flight refueling of helicopters, including refueling pods on underwing pylons and capabilities to receive fuel inflight from boom-equipped tankers. The USAF HC-130J eliminates the enlisted Flight Engineer position, but unlike the USAF C-130J airlift version, still retains a Combat Systems Officer/Navigator position.
Operators
United States Air Force
United States Coast Guard
United States Forest Service
Specifications (HC-130H)
See also
References
External links
C-130, H
1960s United States military rescue aircraft
Four-engined tractor aircraft
High-wing aircraft
Four-engined turboprop aircraft
Air refueling
HC-130 |
6291245 | https://en.wikipedia.org/wiki/Steampacket | Steampacket | Steampacket (sometimes shown as Steam Packet) were a British blues band formed in 1965 by Long John Baldry with Rod Stewart, Julie Driscoll, and organist Brian Auger.
History
A musical revue rather than a single group, Steampacket was formed in 1965 by Long John Baldry after the break-up of his previous group the Hoochie Coochie Men. It included Rod Stewart who had been with Baldry in the Hoochie Coochie Men, vocalist Julie Driscoll, organist Brian Auger and guitarist Vic Briggs. They were managed by Giorgio Gomelsky, who had previously been involved with the Rolling Stones and the Yardbirds.
Steampacket played at various clubs, theatres and student unions around the country, including supporting the Rolling Stones on their 1965 British tour. Because of contractual difficulties, however, they never formally recorded a studio or live album. Tracks from some demo tapes they recorded at a rehearsal in the Marquee Club were released in 1970 on the French label BYG as Rock Generation: Volume 6 - The Steampacket (Or the First Supergroup). The same material was later re-released under other titles, including First of the Supergroups: Early Days and The First Supergroup: Steampacket Featuring Rod Stewart, to cash in on Stewart's success.
Aftermath
Stewart left in early 1966, followed by Long John Baldry a few months later, and the group disbanded soon after. Long John Baldry then joined Bluesology which included a then unknown Elton John on keyboards, before pursuing a solo career, having a number 1 hit record in the UK Singles Chart in 1967 with "Let the Heartaches Begin". Julie Driscoll, Brian Auger and Vic Briggs formed Trinity, with Briggs departing later in 1966 to join Eric Burdon and The Animals. Julie Driscoll, Brian Auger and The Trinity had a UK hit in 1968 with "This Wheel's on Fire". Rod Stewart later sang with the Jeff Beck Group, the Faces and as a solo artist. There is an urban legend that Peter Green and Mick Fleetwood, later of Fleetwood Mac, played with Steampacket. In fact Steampacket, with the exception of Rod Stewart's departure, had the same personnel from its inception to its disintegration. The group that Green and Fleetwood played in alongside Rod Stewart was Shotgun Express.
Lineup
Long John Baldry - vocals
Rod Stewart - vocals
Julie Driscoll - vocals
Brian Auger - organ
Vic Briggs - guitar
Richard Brown aka Ricky Fenson - bass guitar
Micky Waller - drums
References
Further reading
Paul Myers: Long John Baldry and the Birth of the British Blues, Vancouver 2007 - GreyStone Books
Musical groups established in 1965
Musical groups disestablished in 1966
Rod Stewart
English rock music groups
British blues musical groups
British rhythm and blues boom musicians
1965 establishments in England
1966 disestablishments in England |
317400 | https://en.wikipedia.org/wiki/WebObjects | WebObjects | WebObjects is a Java web application server and a server-based web application framework originally developed by NeXT Software, Inc.
WebObjects' hallmark features are its object-orientation, database connectivity, and prototyping tools. Applications created with WebObjects can be deployed as web sites, Java WebStart desktop applications, and/or standards-based web services.
The deployment runtime is pure Java, allowing developers to deploy WebObjects applications on platforms that support Java. One can use the included WebObjects Java SE application server or deploy on third-party Java EE application servers such as JBoss, Apache Tomcat, WebLogic Server or IBM WebSphere.
Apple maintained WebObjects for many years, but since the company stopped maintaining the software, it has been maintained by an online community of volunteers. This community effort is known as "Project Wonder".
WebObjects now also has a number of alternatives: see below.
History
NeXT creates WebObjects
WebObjects was created by NeXT Software, Inc., first publicly demonstrated at the Object World conference in 1995 and released to the public in March 1996. The time and cost benefits of rapid, object-oriented development attracted major corporations to WebObjects in the early days of e-commerce, with clients including BBC News, Dell Computer, Disney, DreamWorks SKG, Fannie Mae, GE Capital, Merrill Lynch, and Motorola.
Apple acquires NeXT, and continues to maintain the software
Following NeXT's merger into Apple Inc. in 1997, WebObjects' public profile languished. Many early adopters later switched to alternative technologies, and currently Apple remains the biggest client for the software, relying on it to power parts of its online Apple Store and the iTunes Store — WebObjects' highest-profile implementation.
WebObjects was part of Apple's strategy of using software to drive hardware sales, and in 2000 the price was lowered from $50,000 (for the full deployment license) to $699. From May 2001, WebObjects was included with Mac OS X Server, and no longer required a license key for development or deployment.
WebObjects transitioned from a stand-alone product to be a part of Mac OS X with the release of version 5.3 in June 2005. The developer tools and frameworks, which previously sold for US$699, were bundled with Apple's Xcode IDE. Support for other platforms, such as Windows, was then discontinued. Apple said that it would further integrate WebObjects development tools with Xcode in future releases. This included a new EOModeler Plugin for Xcode. This strategy, however, was not pursued further.
In 2006, Apple announced the deprecation of Mac OS X's Cocoa-Java bridge with the release of Xcode 2.4 at the August 2006 Worldwide Developers Conference, and with it all dependent features, including the entire suite of WebObjects developer applications: EOModeler, EOModeler Plugin, WebObjects Builder, WebServices Assistant, RuleEditor and WOALauncher. Apple had decided to concentrate its engineering resources on the runtime engine of WebObjects, leaving the future responsibility for developer applications with the open-source community. The main open-source alternative — the Eclipse IDE with the WOLips suite of plugins — had matured to such an extent that its capabilities had, in many areas, surpassed those of Apple's own tools, which had not seen significant updates for a number of years.
Apple promised to provide assistance to the community in its efforts to extend such tools and develop new ones. In a posting to the webobjects-dev mailing list, Daryl Lee from Apple's WebObjects team publicly disclosed the company's new strategy for WebObjects. It promised to "make WebObjects the best server-side runtime environment" by:
Improving performance, manageability, and standards compliance
Making WebObjects work well with Ant and the most popular IDEs, including Xcode and Eclipse
Opening and making public all standards and formats that WebObjects depends upon
WebObjects 5.4, which shipped with Mac OS X Leopard in October 2007, removed the license key requirement for both development and deployment of WebObjects applications on all platforms. All methods for checking license limitations were then deprecated.
The end of WebObjects, and the beginning of Project Wonder
In 2009, Apple stopped issuing new releases of WebObjects outside Apple. The community decided to continue development with Project Wonder, an open-source framework which is built on top of the core WebObjects frameworks and which extends them. For example, Project Wonder has updated development tools and provides a REST framework that was not part of the original WebObjects package.
Though once included in the default installation of Mac OS X Server, WebObjects was no longer installed by default starting with Mac OS X Snow Leopard Server and shortly after, Apple ceased promoting or selling WebObjects. As of 2016, WebObjects is actively supported by its developer community, the "WOCommunity Association", by extending the core frameworks and providing fixes with Project Wonder. The organization last held a Worldwide WebObjects Developer Conference, WOWODC, in 2013.
In May 2016, Apple confirmed that WebObjects had been discontinued.
Tools
As of 2016 most WebObjects architects and engineers are using the tools being developed by the WebObjects community. These tools run within the Eclipse IDE and are open-source. The WebObjects plug-ins for Eclipse are known as WOLips.
Building WebObjects frameworks and applications for deployment is typically achieved using the WOProject set of tools for Apache Ant or Apache Maven. These tools are distributed with WOLips.
Core frameworks
A WebObjects application is essentially a server-side executable, created by combining prebuilt application framework objects with the developer's own custom code. WebObjects' frameworks can be broken down into three core parts:
The WebObjects Framework (WOF) is at the highest level of the system. It is responsible for the application's user interface and state management. It uses a template-based approach to take the object graph and turn it into HTML, or other tag-based information display standards, such as XML or SMIL. It provides an environment where you can use and create reusable components. Components are chunks of presentation (HTML) and functionality (Java code), often with a parameter list to enhance reusability. WebObjects Builder is used to create the HTML templates and to create the bindings that link, for instance, a Java String object to interface objects such as an input field in a web form.
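As an illustration of the component model, the following is a minimal sketch of what a WebObjects component class might look like. The class, field and action names (HelloUser, userName, greeting, greet) are invented for the example, and the matching .wo template that binds them to dynamic elements is not shown.

  import com.webobjects.appserver.WOComponent;
  import com.webobjects.appserver.WOContext;

  // A page component: its public fields and action methods are bound to
  // dynamic elements (e.g. WOTextField, WOString, WOSubmitButton) in the
  // component's HTML template via key-value coding.
  public class HelloUser extends WOComponent {
      public String userName;   // bound to a text field in the template
      public String greeting;   // displayed by a WOString in the template

      public HelloUser(WOContext context) {
          super(context);
      }

      // Action method invoked when the bound submit button is clicked.
      public WOComponent greet() {
          greeting = "Hello, " + userName + "!";
          return null;           // returning null re-renders this page
      }
  }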
The Enterprise Objects Framework (EOF) is, perhaps, the hallmark feature of WebObjects. EOF communicates with relational databases and turns database rows into an object graph. Using EOModeler, the developer can create an abstraction of the database in the form of Java objects. To read information from or insert information into the database, the developer simply works with the Java Enterprise Objects (EOs) from their business logic; EOF manages the Enterprise Objects and automatically generates the SQL required to commit the changes to the database.
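A rough sketch of this workflow is shown below. It assumes a hypothetical entity named "Customer" with a "name" attribute, so the entity and key names are illustrative rather than part of any real model.

  import com.webobjects.eoaccess.EOUtilities;
  import com.webobjects.eocontrol.EOEditingContext;
  import com.webobjects.eocontrol.EOEnterpriseObject;
  import com.webobjects.foundation.NSArray;

  public class CustomerExample {
      public static void run() {
          EOEditingContext ec = new EOEditingContext();

          // Fetch: EOF generates the SELECT statement behind the scenes.
          NSArray customers =
              EOUtilities.objectsMatchingKeyAndValue(ec, "Customer", "name", "Smith");

          // Insert: create a new enterprise object and set an attribute
          // using key-value coding.
          EOEnterpriseObject newCustomer =
              EOUtilities.createAndInsertInstance(ec, "Customer");
          newCustomer.takeValueForKey("Jones", "name");

          // Commit: EOF generates and executes the required INSERT/UPDATE SQL.
          ec.saveChanges();
      }
  }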
Java Foundation. Both Enterprise Objects and WebObjects rest on the aptly named Java Foundation classes. This framework contains the fundamental data structure implementations and utilities used throughout the rest of WebObjects. Examples include basic value and collection classes, such as arrays, dictionaries (objects that contain key-value pairs) and formatting classes. Java Foundation is similar to the Foundation framework contained in Apple's Cocoa API for macOS desktop applications, however Java Foundation is written in Pure Java as opposed to Cocoa's Objective-C (with its Java bridge runtime wrapper). Foundation classes are prefixed with the letters "NS" (a reference to their NeXTSTEP OS heritage). Since the transition of WebObjects to Java in 2000, the functionality of many of Apple's Java Foundation classes is replicated in Sun's own JDK. However, they persist largely for reasons of backwards-compatibility and developers are free to use whichever frameworks they prefer.
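For illustration, a short sketch of the Foundation collection classes in use follows; the stored values are arbitrary examples rather than anything taken from a real application.

  import com.webobjects.foundation.NSArray;
  import com.webobjects.foundation.NSMutableDictionary;

  public class FoundationExample {
      public static void run() {
          // An immutable ordered collection.
          NSArray colors = new NSArray(new Object[] { "red", "green", "blue" });
          System.out.println(colors.count());          // 3
          System.out.println(colors.objectAtIndex(0)); // "red"

          // A mutable collection of key-value pairs (object first, then key).
          NSMutableDictionary settings = new NSMutableDictionary();
          settings.setObjectForKey("42", "timeout");
          System.out.println(settings.objectForKey("timeout")); // "42"
      }
  }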
Rules-Based Rapid Application Development (RBRAD)
WebObjects features a set of rapid development technologies that can automatically create a Web application without the need to write any Java code. Given a model file for a database, WebObjects will create an interface supporting nine common database tasks, including querying, editing and listing. Such applications are useful for prototyping or administering a database, perhaps to check relationships or to seed the database with data.
The user interface is generated dynamically, on-the-fly at runtime using a rules-based system—no code is generated. Consequently, one can modify an application's configuration at runtime (using an assistant program) without recompiling or relaunching the application.
Developers can utilize one of three different technologies, depending upon the type of interface they wish to employ:
Direct To Web (D2W) allows developers to rapidly create an HTML-based Web application that accesses a database.
Direct To Java Client allows developers to rapidly create a client desktop application using the Java Swing toolkit. An advantage of Java Client applications is that they can take advantage of the processing power of the client computer to perform operations such as sorting a list of items received from the server.
Direct To Web Services allows developers to rapidly develop Web service-based applications that provide access to a data store.
Advantages of RBRAD
Vastly decreased development and debugging time;
Increased stability through the use of highly exercised code;
By using the information contained in the data model file, applications will not violate database integrity. Normally you would have to write code to avoid such situations and handle errors generated by bad data;
Fully utilizes the validation services provided by WebObjects and Enterprise Objects.
Java compatibility
WebObjects is a 100% Java product with the following Java-based features:
Deployment: Applications can be deployed on any operating system that has Java 1.3 or later. Many developers have successfully deployed on Windows and various Linux systems such as Red Hat Linux, Debian and SUSE. Applications can also be hosted on any Java EE compatible application server such as JBoss.
Java EE integration: WebObjects applications can be packaged in a single directory (an exploded .war file), which makes it easier to deploy to a Java EE servlet container.
JDBC: Since WebObjects uses JDBC for database connectivity any DBMS that has a JDBC-driver can be used within WebObjects.
Swing interface: WebObjects applications can be delivered to the user as a "Java Client application" or as a Java applet.
Version history
WebObjects was originally released by NeXT Computer in March 1996, but was acquired by Apple Inc. with their acquisition of NeXT in December of that year.
1.0 — March 28, 1996
Debut release.
2.0 — June 25, 1996
Pre-release version of WebObjects Builder application.
3.0 — November 1996
3.1
Supports a subset of the Java APIs (NT only).
3.5 — December 1997
Enhanced Java support (NT only): all objects and components can be worked on as a set of Java APIs based on a complete implementation of the JDK 1.1.3.
4.0 — September 1998
First version of WebObjects to run on the Mac platform — specifically Mac OS X Server 1.0 (a public release of the beta OS formerly code-named 'Rhapsody').
OPENSTEP 4.2 OS no longer supported; Windows NT now uses a new version of the OpenStep base of libraries and binary support called Yellow Box.
Direct actions introduced whereby actions can be sent directly to an object that can handle it, allowing for simpler, static URLs.
Direct to Web code-free development assistant introduced.
WebObjects and Enterprise Objects Framework provide thread-safe APIs. This means that you can write a multithreaded WebObjects application where you couldn't before. This enables applications that can provide user feedback for long-running requests.
Better tools for managing, configuring and testing the scalability of applications.
Java capabilities are greatly improved over previous version, however compiled Objective-C is still two to three times faster;
Possible to build a fully capable Java client either as a stand-alone app or as an applet with the Interface Builder - all sorts of Swing and Java Bean components are sitting on IB palettes for wiring up.
Developers can now debug applications on a machine that doesn't have a web server present.
EOF 3.0 adds support for a new database, OpenBase Lite, which ships with EOF 3.0 as an unsupported demo.
EOF 3.0 introduces new API, mainly in EOUtilities, to facilitate common programming tasks.
EOModeler adds support for prototype attributes and the ability to create and store complex queries (or EOFetchSpecifications).
4.5 — March 2000
Integrated XML support using IBM's alphaWorks parser.
New WebObjects Builder interface, specifically in the main window toolbar, the user interface for binding keys, and the table editing user interface. A path view, an API editor, and component validation have been added.
Application profiling tools.
EOF 4.5 comes with a new sample adaptor: the LDAP adaptor.
Direct to Web now allows you to create your own visual style and exposes a great deal of new API.
Java Client extended considerably, including a new user interface generation layer, Direct to Java Client.
4.5.1
First version to support Mac OS X 10.x and Windows 2000.
Last version to support HP-UX and Mac OS X Server 1.0.
Last version that supported the Objective-C API.
5.0 — May 2001
Major rewrite from Objective-C to Java.
5.1 — January 10, 2002
Create and deploy Enterprise JavaBeans using the built-in container based on OpenEJB.
Deploy WebObjects applications as JSPs or Servlets on top of third-party application servers.
Access and manipulate data stored in JNDI or LDAP directory services.
Automatically generate desktop Java client applications with rich, interactive user interfaces.
Utilize the WebObjects template engine and object-relational mapping for seamless XML messaging.
5.1.2 — May 7, 2002
Contains general bug fixes for WebObjects 5.1 on all platforms.
5.1.3 — June 7, 2002
Contains targeted bug fixes for WebObjects 5.1 on all platforms.
5.1.4 — August 22, 2002
Compatibility with Mac OS X 10.2.
5.2 — November 12, 2002
Web Services support.
Improvements to Java EE integration
Java Web Start support.
Improvements to robustness and stability of Enterprise Objects.
Major bug fixes led many developers to hail this as the first stable 5.x release of WebObjects.
5.2.1 — March 21, 2003
Resolved some incompatibilities with the latest Java 1.4.1 implementation for Mac OS X.
5.2.2 — October 22, 2003
Compatibility with Mac OS X 10.3 Panther and the Xcode IDE.
JBoss on Panther Server qualification.
Qualified for Java 1.4.1.
Fixes for EOF runtime and WOFileUpload.
5.2.3 — March 16, 2004
Performance and stability update addressing issues with CLOSE_WAIT states in deployment using JavaMonitor and wotaskd and issues related to EOF under high load.
Qualified for Java 1.4.2.
5.2.4 — May 2, 2005
Compatibility with Mac OS X 10.4 and the Xcode 2.0 IDE.
5.3 (developer) for Mac OS X 10.4 — June 6, 2005
WebObjects developer tools included free with the Xcode IDE (v2.1).
Development and deployment on platforms other than Mac OS X no longer supported by Apple.
EOModels can be created and edited within Xcode with a new EOModeler plugin that integrates with the CoreData modeling tools.
WebObjects Builder has UI enhancements and generates HTML 4.0.1 code.
WebObjects runtime now supports HTML 4.0.1.
NSArray, NSDictionary and NSSet now implement the interfaces.
Axis 1.1 integrated with the Direct To WebServices feature.
WebObjects is qualified against Oracle 10g using the 10.1.0.2 JDBC drivers; Microsoft SQL Server 2000 8.00.194; MySQL 4.1.10a; OpenBase 8.0; Oracle 9i Enterprise Edition; Sybase ASE 12.5
5.3 (deployment) for Mac OS X Server 10.4 — June 23, 2005
Installer updates the Application Server components in Mac OS X Server 10.4 to WebObjects 5.3.
5.3.1 — November 10, 2005
Addresses incompatibilities with Xcode 2.2 Developer tools on Mac OS X 10.4.
Adds a modified Developer tools license that allows WebObjects applications developed with Xcode 2.2 to be deployed on any compatible platform. The license is available at /System/Library/Frameworks/JavaWebObjects.framework/Resources/License.key after installation.
Adds better SQL Generation in the EOModeler Plug-in design tool in Xcode.
Improved FetchSpecification building in the EOModeler Plugin design tool in Xcode.
Adds a "components and elements" window for improved workflow in WebObjects Builder.
Bug fixes.
5.3.2 — August 7, 2006
Addresses incompatibilities with Xcode 2.4 Developer tools on Mac OS X 10.4.
Security improvements.
Other improvements.
As part of the simultaneous release of Xcode 2.4, the Cocoa Java bridge is deprecated along with the following WebObjects applications: EOModeler, EOModeler Plugin, WebObjects Builder, WebServices Assistant, RuleEditor and WOALauncher.
5.3.3 — February 15, 2007
"WebObjects DST Update": Updates WebObjects 5.3 systems to observe the Daylight Saving Time (DST) changes due to come into effect in March 2007 in many countries, including the United States and Canada. Uses the latest DST and time zone information available as of January 8, 2007.
5.4 — October 26, 2007
License key no longer required or supported
Deprecations: Java Client Nib based applications, Direct to JavaClient based applications, EOCocoaClient based applications, OpenBase no longer example database, Tools (EOModeler, WebObjects Builder, Rule editor)
Combined Component Template Parser that reduces .wo components to single .html files
Generation of XHTML compliant pages
AJAX request handler for enhanced page caching
Added support for secure URL generation
JMX monitoring support
Entity index management in the model
Improved the synchronization with the database
Added support for index generation
Support for enum in attribute conversion
Improved support for vendor specific prototypes (EOJDBCOraclePrototype, EOJDBCFrontBasePrototype, etc.)
Derby support (Embedded database)
Support for Generics
WebServices update (Axis 1.4)
Full support for Apple XML plist (Read and Write)
Ant build support
Open Specifications
5.4.1 — February 11, 2008
"WebObjects 5.4.1 is an update release for the version of WebObjects included in the Mac OS X Leopard tools. This release fixes several bugs in areas such as web services serialization, deployment tools, and database compatibility, among others. This update can be installed on Mac OS X 10.5 Leopard."
Fixed bugs in web services serialization, deployment, databases.
5.4.2 — July 11, 2008
Addresses WOComponent parser issues
Includes WebServices data types and API changes
Includes EOF SQL Generation fixes
Resolves additional issues
5.4.3 — September 15, 2008
EOF Database snapshot not updating
Webassistant not available for D2W apps
Exceptions when using WOTextField with formatters
Duplicate primary keys generated by FrontBase JDBC Adaptor under load
Additional issue fixes
WOWODC
Since 2007, the community has held an annual conference for WebObjects developers, WOWODC. In 2007 and 2008, the conference was held the weekend before WWDC, and in 2009, the community promoted two conferences: WOWODC West in San Francisco on June 6 and 7, immediately before WWDC, and WOWODC East in Montreal on August 29 and 30. WOWODC 2010 was held in Montreal on August 27, 28 and 29, 2010. WOWODC 2011 was held in Montreal on July 1, 2 and 3 in 2011. WOWODC 2012 was held in Montreal on June 30, July 1 and 2, 2012. WOWODC 2013 was held in Montreal. WOWODC 2014 was held in Montreal (April 12, 13 and 14). WOWODC 2015 was held in Hamburg on April 25, 26 and 27. WOWODC 2016 was held in Montréal on June 24, 25 and 26
Open-source alternatives
Interest in open-source alternatives to WebObjects that use the Objective-C language grew with WebObjects' move from Objective-C (last version WO 4.5.1) to Java (first version WO 5.0). The two frameworks available are SOPE, which has been used as the basis of the OpenGroupware.org groupware server for about eight years, and GNUstepWeb, which is part of the GNUstep project. Open-source rewrites of the EOF frameworks also exist (AJRDatabase, GDL2).
There are also Java-based alternatives:
Wotonomy is a project, hosted on SourceForge, that implements a clean-room, open-source version of the WebObjects 5.x system. It provides a near-complete implementation of the MVC web-framework, as well as partial implementations of Foundation, Control, and Data layers, and other features. It is sufficiently functional for low-transaction volume, single-source database applications. While the project's structure was re-organized in 2006 around an Apache Maven build infrastructure and migrated to the Subversion revision control system, there has not been any substantial update to the codebase since 2003.
Apache Tapestry has a design and philosophy similar to that of WebObjects. Tapestry is frequently combined with Apache Cayenne, a persistence framework inspired by EOF.
GETobjects is another framework with an API similar to WebObjects 5.x that is related to SOPE.
An attempt to do a Swift version based on SOPE / GETobjects is available as SwiftObjects. The implementation for Swift 4 is limited due to the reflection capabilities of that Swift version.
See also
Comparison of application servers
Comparison of web frameworks
UM.SiteMaker
References
External links
WebObjects at Apple Developer (Archived from the original)
Official WebObjects Community Website
Apple Inc. software
Java enterprise platform
Web frameworks |
59529 | https://en.wikipedia.org/wiki/Solubility%20equilibrium | Solubility equilibrium | Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios.
Definitions
A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility. Units of solubility may be molar (mol dm−3) or expressed as mass per unit volume, such as μg mL−1. Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated. A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation.
There are three main types of solubility equilibria.
Simple dissolution.
Dissolution with dissociation reaction. This is characteristic of salts. The equilibrium constant is known in this case as a solubility product.
Dissolution with ionization reaction. This is characteristic of the dissolution of weak acids or weak bases in aqueous media of varying pH.
In each case an equilibrium constant can be specified as a quotient of activities. This equilibrium constant is dimensionless as activity is a dimensionless quantity. However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry#Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression.
For a chemical equilibrium
ApBq(s) ⇌ p A(aq) + q B(aq)
the solubility product, Ksp, for the compound ApBq is defined as follows
Ksp = [A]^p [B]^q
where [A] and [B] are the concentrations of A and B in a saturated solution. A solubility product has a similar functionality to an equilibrium constant though formally Ksp has the dimension of (concentration)p+q.
Effects of conditions
Temperature effect
Solubility is sensitive to changes in temperature. For example, sugar is more soluble in hot water than cool water. It occurs because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's Principle, when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization, which can be used to purify a chemical compound. When dissolution is exothermic (heat is released) solubility decreases with rising temperature.
Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but a decreasing solubility at higher temperature. This is because the solid phase is the decahydrate () below the transition temperature, but a different hydrate above that temperature.
The dependence on temperature of solubility for an ideal solution (achieved for low solubility substances) is given by the following expression containing the enthalpy of melting, ΔmH, and the mole fraction x of the solute at saturation:
∂(ln x)/∂T = ΔmH / (RT²)
where ΔmH is the difference between the partial molar enthalpy of the solute at infinite dilution and the enthalpy per mole of the pure crystal.
This differential expression for a non-electrolyte can be integrated over a temperature interval from T0 to T to give:
ln x(T) = ln x(T0) + (ΔmH/R)(1/T0 − 1/T)
For nonideal solutions the activity a of the solute at saturation appears instead of the mole fraction solubility in the derivative with respect to temperature:
∂(ln a)/∂T = ΔmH / (RT²)
Common-ion effect
The common-ion effect is the effect of decreasing the solubility of one salt, when another salt, which has an ion in common with it, is also present. For example, the solubility of silver chloride, AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water.
The solubility, S, in the absence of a common ion can be calculated as follows. The concentrations [Ag+] and [Cl−] are equal because one mole of AgCl dissociates into one mole of Ag+ and one mole of Cl−. Let the concentration of [Ag+](aq) be denoted by x. Then
Ksp = [Ag+][Cl−] = x²
Ksp for AgCl is equal to at 25 °C, so the solubility is .
Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm−3. The solubility, ignoring any possible effect of the sodium ions, is now calculated by
Ksp = x (0.01 + x)
This is a quadratic equation in x, which is also equal to the solubility.
In the case of silver chloride, x² is very much smaller than 0.01x, so this term can be ignored. Therefore
S = x ≈ Ksp / 0.01
a considerable reduction from . In gravimetric analysis for silver, the reduction in solubility due to the common ion effect is used to ensure "complete" precipitation of AgCl.
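To make the size of the effect concrete, the calculation can be worked through with an assumed value of Ksp ≈ 1.8 × 10−10 for AgCl at 25 °C, a commonly quoted figure used here purely for illustration:
Without added chloride: S = √Ksp ≈ 1.3 × 10−5 mol dm−3.
With 0.01 mol dm−3 chloride: S ≈ Ksp/0.01 ≈ 1.8 × 10−8 mol dm−3,
a reduction of roughly three orders of magnitude.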
Particle size effect
The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on the solubility constant can be quantified as follows:
ln *KA = ln *KA→0 + (2γAm)/(3RT)
where *KA is the solubility constant for the solute particles with the molar surface area A, *KA→0 is the solubility constant for a substance with molar surface area tending to zero (i.e., when the particles are large), γ is the surface tension of the solute particle in the solvent, Am is the molar surface area of the solute (in m2/mol), R is the universal gas constant, and T is the absolute temperature.
Salt effects
The salt effects (salting in and salting-out) refers to the fact that the presence of a salt which has no ion in common with the solute, has an effect on the ionic strength of the solution and hence on activity coefficients, so that the equilibrium constant, expressed as a concentration quotient, changes.
Phase effect
Equilibria are defined for specific crystal phases. Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they have the same chemical identity (calcium carbonate). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. However, kinetic factors may favor the formation of the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state.
In pharmacology, the metastable state is sometimes referred to as amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of long-distance interactions inherent in crystal lattice. Thus, it takes less energy to solvate the molecules in amorphous phase. The effect of amorphous phase on solubility is widely used to make drugs more soluble.
Pressure effect
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:
(∂ ln xi / ∂P)T = −(Vi,aq − Vi,cr) / (RT)
where xi is the mole fraction of the i-th component in the solution, P is the pressure, T is the absolute temperature, Vi,aq is the partial molar volume of the i-th component in the solution, Vi,cr is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant.
The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.
Quantitative aspects
Simple dissolution
Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution
sucrose(s) ⇌ sucrose(aq)
An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants):
Ko = {sucrose(aq)} / {sucrose(s)}
where Ko is called the thermodynamic solubility constant. The braces indicate activity. The activity of a pure solid is, by definition, unity. Therefore
Ko = {sucrose(aq)}
The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient, γ. When Ko is divided by γ, the solubility constant, Ks,
Ks = Ko / γ = [A]
is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose K = at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm−3 (540 g/l). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as do most other carbohydrates.
Dissolution with dissociation
Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride:
AgCl(s) ⇌ Ag+(aq) + Cl−(aq)
The expression for the equilibrium constant for this reaction is:
Ko = {Ag+}{Cl−} / {AgCl(s)}
where Ko is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one.
When the solubility of the salt is very low the activity coefficients of the ions in solution are nearly equal to one. By setting them to be actually equal to one this expression reduces to the solubility product expression:
Ksp = [Ag+][Cl−]
For 2:2 and 3:3 salts, such as CaSO4 and FePO4, the general expression for the solubility product is the same as for a 1:1 electrolyte
Ksp = [M][A]
(electrical charges are omitted in general expressions, for simplicity of notation)
With an unsymmetrical salt like Ca(OH)2 the solubility expression is given by
Ksp = [Ca][OH]²
Since the concentration of hydroxide ions is twice the concentration of calcium ions this reduces to
Ksp = 4[Ca]³
In general, with the chemical equilibrium
ApBq(s) ⇌ p A(aq) + q B(aq)
and Ksp = [A]^p [B]^q, the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived.
{| class="wikitable"
!Salt ||p||q||Solubility, S
|-
!AgCl, Ca(SO4), Fe(PO4)
| 1 || 1 || (Ksp)^(1/2)
|-
!Na2(SO4), Ca(OH)2
| 2, 1 || 1, 2 || (Ksp/4)^(1/3)
|-
!Na3(PO4), FeCl3
| 3, 1 || 1, 3 || (Ksp/27)^(1/4)
|-
!Al2(SO4)3, Ca3(PO4)2
| 2, 3 || 3, 2 || (Ksp/108)^(1/5)
|-
!Mp(An)q
|p
|q
|(Ksp/(p^p·q^q))^(1/(p+q))
|}
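The entries in the last column follow from a standard derivation. If S denotes the molar solubility of the salt MpAq, then [M] = pS and [A] = qS, so that
Ksp = (pS)^p (qS)^q = p^p·q^q·S^(p+q)
which rearranges to
S = (Ksp / (p^p·q^q))^(1/(p+q))
For example, for a 1:2 salt such as Ca(OH)2 this gives S = (Ksp/4)^(1/3).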
Solubility products are often expressed in logarithmic form. Thus, for calcium sulfate, , . The smaller the value, or the more negative the log value, the lower the solubility.
Some salts are not fully dissociated in solution. Examples include MgSO4, famously discovered by Manfred Eigen to be present in seawater as both an inner sphere complex and an outer sphere complex. The solubility of such salts is calculated by the method outlined in dissolution with reaction.
Hydroxides
The solubility product for the hydroxide of a metal ion, Mn+, is usually defined as follows:
Ksp = [Mn+][OH−]^n
However, general-purpose computer programs are designed to use hydrogen ion concentrations with the alternative definitions.
For hydroxides, solubility products are often given in a modified form, K*sp, using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, Kw = [H+][OH−]:
K*sp = [Mn+] / [H+]^n = Ksp / Kw^n
For example, at ambient temperature, for calcium hydroxide, Ca(OH)2, lg Ksp is ca. −5 and lg K*sp ≈ −5 + 2 × 14 ≈ 23.
Dissolution with reaction
A typical reaction with dissolution involves a weak base, B, dissolving in an acidic aqueous solution.
This reaction is very important for pharmaceutical products. Dissolution of weak acids in alkaline media is similarly important.
The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali.
Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al3+(aq).
Formation of a chemical complex may also change solubility. A well-known example, is the addition of a concentrated solution of ammonia to a suspension of silver chloride, in which dissolution is favoured by the formation of an ammine complex.
When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance.
Experimental determination
The determination of solubility is fraught with difficulties. First and foremost is the difficulty in establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow solvent evaporation may be an issue. Supersaturation may occur. With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic.
Static methods
In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis. This usually requires separation of the solid and solution phases. In order to do this the equilibration and separation should be performed in a thermostatted room. Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase.
A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide, to an aqueous buffer mixture. Immediate precipitation may occur giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small resulting in Tyndall scattering. In fact the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing.
Dynamic methods
Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored and strong acid and base titrant are added to adjust the pH to discover the equilibrium conditions when the two rates are equal. The advantage of this method is that it is relatively fast as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions.
See also
Solubility table: A table of solubilities of mostly inorganic salts at temperatures between 0 and 100 °C.
Solvent models
References
External links
Section 6.9: Solubilities of ionic salts. Includes a discussion of the thermodynamics of dissolution.
IUPAC–NIST solubility database
Solubility products of simple inorganic compounds
Solvent activity along a saturation line and solubility
Solubility challenge: Predict solubilities from a data base of 100 molecules. The database, of mostly compounds of pharmaceutical interest, is available at One hundred molecules with solubilities (Text file, tab separated).
A number of computer programs are available to do the calculations. They include:
CHEMEQL: A comprehensive computer program for the calculation of thermodynamic equilibrium concentrations of species in homogeneous and heterogeneous systems. Many geochemical applications.
JESS: All types of chemical equilibria can be modelled including protonation, complex formation, redox, solubility and adsorption interactions. Includes an extensive database.
MINEQL+: A chemical equilibrium modeling system for aqueous systems. Handles a wide range of pH, redox, solubility and sorption scenarios.
PHREEQC: USGS software designed to perform a wide variety of low-temperature aqueous geochemical calculations, including reactive transport in one dimension.
MINTEQ: A chemical equilibrium model for the calculation of metal speciation, solubility equilibria etc. for natural waters.
WinSGW: A Windows version of the SOLGASWATER computer program.
Equilibrium chemistry
Solutions |
66560053 | https://en.wikipedia.org/wiki/CalyxOS | CalyxOS | CalyxOS is an operating system for smartphones based on Android with mostly free and open-source software. It is produced by the Calyx Institute as part of its mission to make privacy more accessible and easier to use.
Software
CalyxOS includes features and options not available in the official firmware distributed by most mobile device vendors. These features include phone dialer integration with encrypted calling applications such as Signal, integration of Tor, and the inclusion of free VPN services run by The Calyx Institute and other non-profits such as Riseup. The default search is set to DuckDuckGo and the default web browser in CalyxOS is DuckDuckGo's Android browser. CalyxOS also includes MicroG as a privacy enhanced replacement for some of the functionality in Google Play Services.
CalyxOS has also led the development of SeedVault, an encrypted backup and restore application for integration into Android-based operating systems, which has been adopted in LineageOS, GrapheneOS and others.
The operating system aims to preserve the Android security model by default, taking full advantage of Android's Verified Boot system of cryptographic signing of the operating system and running with a locked boot loader.
Reception
In October 2020, Moritz Tremmel reviewed CalyxOS. A month later, Tremmel explained why he preferred CalyxOS over LineageOS. A year later in September 2021, Tremmel further explained how CalyxOS was different than other ROMs because it did not require as much "fiddling". Rahul Nambiampurath, writing for MakeUseOf in March 2021, termed CalyxOS, "[one of the] best [Android] ROMs for privacy ... offers the perfect middle ground between convenience and privacy". In August 2021, Android Authority wrote CalyxOS "puts privacy and security into the hands of everyday users."
References
See also
Comparison of mobile operating systems
List of custom Android distributions
Security-focused operating system
Guardian Project
Operating systems
Free and open-source Android software
Custom Android firmware |
24226726 | https://en.wikipedia.org/wiki/Starlight%20Information%20Visualization%20System | Starlight Information Visualization System | Starlight is a software product originally developed at Pacific Northwest National Laboratory and now by Future Point Systems. It is an advanced visual analysis environment. In addition to using information visualization to show the importance of individual pieces of data by showing how they relate to one another, it also contains a small suite of tools useful for collaboration and data sharing, as well as data conversion, processing, augmentation and loading.
The software, originally developed for the intelligence community, allows users to load data from XML files, databases, RSS feeds, web services, HTML files, Microsoft Word, PowerPoint, Excel, CSV, Adobe PDF, TXT files, etc. and analyze it with a variety of visualizations and tools. The system integrates structured, unstructured, geospatial, and multimedia data, offering comparisons of information at multiple levels of abstraction, simultaneously and in near real-time. In addition Starlight allows users to build their own named entity-extractors using a combination of algorithms, targeted normalization lists and regular expressions in the Starlight Data Engineer (SDE).
As an example, Starlight might be used to look for correlations in a database containing records about chemical spills. An analyst could begin by grouping records according to the cause of the spill to reveal general trends. Sorting the data a second time, they could apply different colors based on related details such as the company responsible, age of equipment or geographic location. Maps and photographs could be integrated into the display, making it even easier to recognize connections among multiple variables.
Starlight has been deployed to both the Iraq and Afghanistan wars and used on a number of large-scale projects.
PNNL began developing Starlight in the mid-1990s, with funding from the Land Information Warfare Agency, a part of the Army Intelligence and Security Command, and development continued at the laboratory with funding from the NSA and the CIA. Starlight integrates visual representations of reports, radio transcripts, radar signals, maps and other information. The software system was honored with an R&D 100 Award for technical innovation.
In 2006 Future Point Systems, a Silicon Valley startup, acquired rights to jointly develop and distribute the Starlight product in cooperation with the Pacific Northwest National Laboratory.
The software is now also used outside of the military/intelligence communities in a number of commercial environments.
References
Further reading
Inside Energy With Federal Lands. (March 10, 2003) PNNL offers info-system license. Volume 20; Issue 45; Page 2
Research & Development. (September 1, 2003) Software Offers 3-D Data Management. Section: Special; Volume 45; Issue 9; Page 58.
R&D Management. (September 1, 2003) 11 innovative, award winning technologies. Volume 45; Issue 9; Page 18.
Kritzstein, Brian. (December 10, 2003) Military Geospatial Technology. Starlight, the leading edge of an emerging class of information systems that couples advanced information modeling and management techniques within a visual interface, links data and information top points on a map. Volume: 1 Issue: 1
Commercial terrain visualization software product information. (2003) Pacific Northwest National Laboratory (PNNL) ; Starlight.
Reid, Hal. (March 8, 2005) Directions Magazine. Starlight Overview and Interview with Battelle's Brian Kritzstein.
St. John, Jeff. (February 16, 2006) Tri-City Herald PNNL earns 4 technology awards.
Ascribe Newswire. (February 16, 2006) Pacific Northwest National Laboratory Recognized for Commercializing Technology.
External links
Starlight PNNL website
Starlight Official Website
Computational science
Computer graphics
Infographics
Scientific visualization
Data visualization software |
613351 | https://en.wikipedia.org/wiki/List%20of%20computing%20people | List of computing people | This is a list of people who are important or notable in the field of computing, but who are not primarily computer scientists or programmers.
A
Alfred Aho, co-developer of the AWK language
Leonard Adleman, encryption (RSA)
Marc Andreessen, co-founder of Netscape Communications Corporation
B
Tim Berners-Lee, inventor of the World Wide Web
Stephen Bourne, developer of the Bourne shell
C
John Carmack, realtime computer game graphics, id Software
Noam Chomsky, linguist, language theorist (Chomsky hierarchy) and social critic
D
Theo de Raadt, founder of the OpenBSD and OpenSSH projects
E
J. Presper Eckert, ENIAC
Larry Ellison, co-founder of Oracle Corporation
Marc Ewing, creator of Red Hat Linux
F
G
Bill Gates, co-founder and Chairman of Microsoft
James Gosling, "father" of the Java programming language
H
Grace Hopper, pioneer of computer programming who invented one of the first linkers
I
Jonathan Ive, Senior Vice President of Industrial Design at Apple
J
Steve Jobs, co-founder and CEO of Apple
Bill Joy, co-founder of Sun Microsystems, BSD
K
Brian Kernighan, Dennis Ritchie, programming language C
Donald Knuth, The Art of Computer Programming, TeX
L
Rasmus Lerdorf, creator of the PHP Scripting Language
Lawrence Lessig, professor of law and founder of the Creative Commons
Ada Lovelace
M
John William Mauchly, ENIAC
John McCarthy, LISP programming language
Bob Miner, co-founder of Oracle Corporation
Marvin Minsky, AI luminary
Gordon E. Moore, co-founder of Intel, Moore's Law
N
Roger Needham
John von Neumann, theoretical computer science
Robert Noyce, co-founder of Intel and the founder of integrated circuit
P
Sir John Anthony Pople, pioneer in computational chemistry
Jon Postel, Internet pioneer, founder of IANA
Q
R
Eric Raymond, Open Source movement luminary
Dennis Ritchie
Ron Rivest, encryption (RSA)
Guido van Rossum, Python (programming language) Benevolent Dictator For Life
S
Adi Shamir, encryption (RSA)
Mark Shuttleworth, founder of Canonical
Richard Stallman, founder of GNU
Olaf Storaasli, NASA Finite element machine
Bjarne Stroustrup, founder of C++
T
Linus Torvalds, Linux
Alan Turing, British mathematician and cryptographer
U
V
W
Prof. Joseph Weizenbaum, computer critic
Kevin Warwick, cyborg scientist, implant self-experimenter
Niklaus Wirth, developed Pascal
Peter J. Weinberger, co-developer of the AWK language
Sophie Wilson, designer of the ARM instruction set
Stephen Wolfram, founder of Wolfram Research, physicist, software developer, mathematician
Steve Wozniak, co-founder of Apple; creator of the Apple I and Apple II computers
X
Y
Z
Jill Zimmerman, James M. Beall Professor of Mathematics and Computer Science at Goucher College
Konrad Zuse, built one of the first computers
See also
List of programmers
List of computer scientists
List of pioneers in computer science
List of Russian IT developers
Computing
Computing people |
62074417 | https://en.wikipedia.org/wiki/Legends%20of%20Runeterra | Legends of Runeterra | Legends of Runeterra (LoR is a 2020 digital collectible card game developed and published by Riot Games. Inspired by the physical collectible card game Magic: The Gathering, the developers sought to create a game within the same genre that significantly lowered the barrier to entry. Since its release in April 2020, the game has been free-to-play, and is monetised through purchasable cosmetics. The game is available for Microsoft Windows and mobile operating systems iOS and Android.
Like other collectible card games, players play one versus one to reduce their opponent's health to zero. Cards come in a variety of types and belong to one of ten regions—groups of cards with a similar gameplay identity. One significant feature is the game's combat pacing; unlike in other collectible card games, each player alternates between attacking and defending every turn.
Many characters from League of Legends, a multiplayer online battle arena by Riot Games, feature in the game. The fictional universe of Runeterra, released by the developer through short stories, comic books, and an animated series, provides flavor and theming for the game's cards.
Legends of Runeterra has been well received by critics, who point to its generous progression systems, accessible gameplay, and high-quality visuals, and has won several industry awards.
Gameplay
Legends of Runeterra is a digital collectible card game played one versus one. At the start of a match, each player's Nexus has 20 health points; the first to fall to zero loses. Players begin each match with a hand of four cards, which they may trade away for another random card from their deck. Each round, both players draw one card. Cards are played by spending mana; players begin with zero mana, and gain one additional mana crystal per round up to a maximum of ten. A maximum of three unspent mana is stored automatically at the end of a round as spell mana; this can be used in future rounds to cast spells but cannot be used to summon unit cards.
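The mana-banking rule described above can be summarised in a short sketch. The following Python snippet is only an illustration built from the rules in this section; the class name and the order in which regular and banked mana are spent are assumptions for the example, not code or behaviour taken from the game itself.

```python
class ManaPool:
    """Minimal model of the mana rules described above; illustrative only."""
    MAX_CRYSTALS = 10
    MAX_SPELL_MANA = 3

    def __init__(self):
        self.crystals = 0    # mana crystals gained so far (pool refills to this each round)
        self.mana = 0        # unspent mana in the current round
        self.spell_mana = 0  # banked mana, usable only on spells

    def start_round(self):
        # Up to three unspent mana from the previous round is banked as spell mana,
        # then one new crystal is gained (to a maximum of ten) and the pool refills.
        self.spell_mana = min(self.MAX_SPELL_MANA, self.spell_mana + self.mana)
        self.crystals = min(self.MAX_CRYSTALS, self.crystals + 1)
        self.mana = self.crystals

    def pay(self, cost, is_spell):
        # Spells may draw on banked spell mana; units may not.
        available = self.mana + (self.spell_mana if is_spell else 0)
        if cost > available:
            return False
        spent_regular = min(cost, self.mana)  # spending order is a simplification
        self.mana -= spent_regular
        if is_spell:
            self.spell_mana -= cost - spent_regular
        return True
```

Under these rules, a player who ends round three with two unspent mana would, for example, begin round four with four mana crystals plus two spell mana available for spells.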
One of the game's distinguishing features is its combat pacing. Each round, the "attack token", a symbol which indicates which player may attack and who will defend, alternates from player to player. This is reflected visually on each player's half of the board, with a sword icon representing attack or a shield for defense. Some cards enable players to attack when they do not have the attack token.
Cards
Each card in the game belongs to a region; in standard play, one deck can use cards from up to two regions. Regions have a distinct style of play and identity. Unlike in other trading card games, there are no neutral cards that can be used in every deck. The regions originated in the wider League of Legends expanded universe. Upon the game's initial release, there were three types of card: champions, followers, and spells. Champion cards are the playable characters from League of Legends. These cards are unique within the game because they can level up. Levelling a champion transforms the card—and all copies of it in the player's deck—into a more powerful version of the card. Unit cards, which include champions and non-champions (followers), have numbers representing their attack and health statistics; attack is how much damage a unit deals to either the Nexus or its blocker, while health reflects the maximum damage a card can take before being removed from play.
Spell cards have a "speed", denoting when they can be played and in what way the opponent is able to respond, if at all. At launch, there were three speeds: slow, fast, and burst. Slow-speed spells cannot be played during active combat, pass priority over to the opponent, and can be responded to with fast or burst spells; fast spells can be played during combat and do not pass priority; and burst speed spells resolve their effect instantly with no opportunity for opponent response. A fourth speed, Focus, resolves immediately and does not pass over turn priority, but can only be used outside of combat. Unit cards do not have a speed, but end a player's turn within a round.
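As a rough illustration, the speed rules above can be encoded as simple data. This is a sketch based only on the descriptions in this section; the names are invented for the example, and whether burst spells are playable during combat is an assumption not stated in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpellSpeed:
    name: str
    playable_in_combat: bool   # can be cast while an attack is being resolved
    passes_priority: bool      # opponent gets a chance to respond before it resolves
    resolves_instantly: bool   # effect happens with no response window

# Encoding of the four speeds as described in this section (illustrative only).
SPEEDS = {
    "slow":  SpellSpeed("slow",  playable_in_combat=False, passes_priority=True,  resolves_instantly=False),
    "fast":  SpellSpeed("fast",  playable_in_combat=True,  passes_priority=False, resolves_instantly=False),
    # Combat availability for burst is assumed here; the section only states it resolves instantly.
    "burst": SpellSpeed("burst", playable_in_combat=True,  passes_priority=False, resolves_instantly=True),
    "focus": SpellSpeed("focus", playable_in_combat=False, passes_priority=False, resolves_instantly=True),
}
```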
Another card type was added in the Monuments of Power expansion—landmarks. Landmarks are played with regular unit mana and consume a position on the player's board; they cannot block or attack. Some landmarks have a "countdown" mechanic, wherein they cause a set effect after a certain number of rounds.
Development and release
Riot Games employees have considered making a card game since early in the company's history. The company has a significant number of fans of the collectible card game genre. Legends of Runeterra's balancing director, Steve Rubin, pointed to Jeff Jew, the game's executive producer and an early Riot Games employee, and Andrew Yip, as big fans of Magic: The Gathering. There were several different concepts of the game, but Legends of Runeterra was primarily developed over three years beginning in 2017. Riot recruited professional Magic competitors as early playtesters; of them, Steve Rubin was invited to return permanently and later moved into the design team. Rubin noted that the announcement of Artifact caused the developers to consider rushing the game's release, but ultimately decided to polish the game and aim for wider demographics.
A significant challenge in development was determining the mechanics of card acquisition; an early iteration in which players simply unlocked region combinations was poorly received by playtesters, who missed the satisfaction of collecting all cards. Accessibility was a priority for the developers, who sought to provide a familiar experience while not forcing players to buy booster packs, random bundles of cards otherwise common in the CCG genre. The developers placed a limit on how many cards could be bought in exchange for real money each week. Instead, players are given a number of random cards each week that scales with how frequently they play, and a mechanic called Wild Cards, a way for players to directly craft desired cards. Jeff Jew said that frictionless card collection for players enables the developers to balance more responsively, as players would not be upset that a deck they had saved up to build had been weakened.
Release and sets
Legends of Runeterra was revealed at Riot Games' celebration event of the tenth anniversary of League of Legends on October 15, 2019; applications for the closed beta period began following the conclusion of the stream. Eurogamer observed the unusual timing of the reveal, given the recent failure of Valve's Artifact and the waning audience for Blizzard Entertainment's Hearthstone. The first closed beta period ended in October 2019. A second, held from November 14 to 19, 2019, provided access to an additional mode called Expeditions. The open beta, giving access to all players, commenced on January 24, 2020; unlike in the closed beta period, cards and cosmetics purchased in the open beta carried over to the live release of the game.
The game was released on April 29, 2020; although the beta period was limited to Windows users, the launch accompanied the game's release on mobile operating systems iOS and Android. During beta, the game had included six regions, with four champion cards per region, and 294 total cards. The official launch also brought a new set to the game, Rising Tides, introducing 120 new cards and a new region—Bilgewater. Along with new cards, sets contain new game mechanics and further development to existing ones. Every existing region was given an additional champion, with Bilgewater having six. With the game's second set, Call of the Mountain, Riot Games altered the release schedule, with each set spanning three "expansions". Call of the Mountain introduced the region of Mount Targon and was released for PC and mobile devices on August 27, 2020. The region of Shurima became part of the game with the Empires of the Ascended set, released on March 3, 2021. The tenth and final region of the game, Bandle City, was released on August 25, 2021, and will bring four expansions instead of the usual three.
Reception
Legends of Runeterra received positive reviews from critics. According to review aggregator Metacritic, the game has a weighted average of 87/100.
Many outlets highlighted that the game was both accessible for newcomers to the genre while preserving its depth. IGN's Cam Shea awarded the game a 9/10, noting that it managed to maintain its complexity while also streamlining elements from other collectible card games, such as Magic: The Gathering. Jason Coles of NME wrote that it "may well be the most accessible card game out there".
Also of note was the game's generous free-to-play business model, especially in relation to other games in the same genre. Giving the game an 85/100, Steven Messner, writing for PC Gamer, noted that "booster packs", bundles of cards purchasable with real currency, are absent, replaced by a generous battle pass system that gives out an abundance of free cards and crafting material every week. Messner also mentioned the ease of achieving the maximum level of the battle pass every week.
Awards
The game was nominated for Best Mobile Game at The Game Awards 2020. Apple named it the iPad Game of the Year for 2020. It also won the Mobile Game of the Year award at the 24th Annual D.I.C.E. Awards in 2021.
References
Notes
Citations
External links
League of Legends
Digital collectible card games
Card battle video games
Multiplayer online games
Free-to-play video games
Windows games
IOS games
Android (operating system) games
Video games developed in the United States
2020 video games
Riot Games games |
410411 | https://en.wikipedia.org/wiki/Fanshawe%20College | Fanshawe College | Fanshawe College of Applied Arts and Technology, commonly shortened to Fanshawe College, is a public college in Southwestern Ontario, Canada. One of the largest colleges in Canada, it has campuses in London, Simcoe, St. Thomas and Woodstock with additional locations in Southwestern Ontario. Fanshawe has approximately 43,000 students and provides over 200 higher education programs.
History
In 1962, the Ontario Vocational Centre (OVC) was founded in London, Ontario, and held its first classes on September 28, 1964. In 1967, it became Fanshawe College, part of a provincial system of applied arts and technology colleges. Fanshawe subsequently established campuses in Woodstock, St. Thomas, and Simcoe. The London campus originally consisted of three buildings, but has since been subject to a series of extensions. The college's name has Old English origins, combining the words fane (meaning temple or building) and shaw or shawe (meaning woods) to mean "temple in the woods".
James A. Colvin was named Fanshawe College's first president in 1967 and held the position until 1979, when he was succeeded by Harry Rawson, who served as president until his retirement in 1987. Barry Moore was the third president from 1987 to 1996. Howard Rundle, Fanshawe's longest-serving president, subsequently led the college for 18 years until his retirement on August 31, 2013. Peter Devlin became president of the college on September 3, 2013, and previously served as a lieutenant general in the Canadian Army.
In 2018, Fanshawe established its fifth school, the School of Digital and Performing Arts, offering creative programs previously offered by the School of Contemporary Media and School of Design.
130 Dundas Street opened in September 2018. The new building is home to 1,600 students from the School of Information Technology and the School of Tourism, Hospitality and Culinary Arts.
On April 27, 2015, the family of the late Don Smith, the co-founder of EllisDon, announced that the School of Building Technology would be renamed the Donald J. Smith School of Building Technology in his honor. Don was the first recipient of a Fanshawe College honorary diploma in 1992. In 2008, Fanshawe presented his wife, Joan, with an honorary diploma.
In 2014, Fanshawe announced that it would purchase the building of the recently closed Kingsmill's Department Store to expand its downtown London campus, requesting an additional grant of $10 million from City Council. The request proved politically contentious in a municipal election year and was initially refused by Council following a tie vote on July 29. However, after the local organization Downtown London put up $1 million in support of the initiative, London City Council narrowly voted to approve the remainder of the funding after minor additional contract changes in its favor.
On April 2, 2014, Fanshawe College unveiled its new visual identity and brand promise. Fanshawe president Peter Devlin stated that the new brand "focusses on Fanshawe's desire to help students reach their full potential." The rebranding process took place during the summer of 2013, during which the input of more than 6,000 current students, staff, alumni, guidance counsellors, business and academic leaders, government and community partners was used to determine the new brand. The college named its new logo NorthStar because of its visual and symbolic link to the star famous for helping generations of travelers find their way. In an online survey, NorthStar was preferred two to one over all other concepts indicated in surveys, including the then-current logo.
In May 2011, the college opened its Centre for Applied Transportation Technologies, with a capacity of 1,500 students. In September 2014, Fanshawe College established its School of Public Safety, to provide public safety programs. The school received premises in September 2016. In June 2016, Fanshawe opened its Canadian Centre for Product Validation (CCPV), a testing facility. The college established the Norton Wolf School of Aviation Technology after purchasing Jazz Aviation facilities at London International Airport in August 2013.
The Fanshawe College Arboretum was established in 1995.
Programs
Fanshawe offers more than 200 degree, diploma, certificate and apprenticeship programs to 43,000 students each year.
The College has 15 academic schools: Donald J. Smith School of Building Technology; Lawrence Kinlin School of Business; Norton Wolf School of Aviation Technology; School of Applied Science and Technology; School of Community Studies; School of Contemporary Media; School of Design; School of Digital and Performing Arts; School of Health Sciences; School of Information Technology; School of Language and Liberal Studies; School of Nursing; School of Public Safety; School of Tourism, Hospitality and Culinary Arts; and School of Transportation Technology and Apprenticeship.
Athletics
Fanshawe College joined the Ontario College Athletic Association (OCAA) in 1967 as one of the six founding members. The Falcons currently compete in 14 varsity sports, with 19 teams including: men's and women's basketball, men's and women's volleyball, men's and women's indoor and outdoor soccer, men's and women's golf, men's and women's badminton, men's and women's cross-country, men's baseball, women's softball and men's and women's and mixed curling.
Many of Fanshawe's varsity programs excel not only in the OCAA but also in the Canadian Colleges Athletic Association (CCAA). As of 2020, the Falcons have won 21 national championships, 147 provincial championships and a total of 417 medals.
In 2019/20, Fanshawe won 15 total OCAA medals and three CCAA national medals, including five provincial championships and a national gold medal in Men’s Cross Country.
In 2019, Fanshawe Athletics set new school records for most medals in a season. Falcons teams concluded the 2018/19 season with 28 overall medals, a 40 per cent increase over the previous season's record total of 20. The Falcons led the Ontario Colleges Athletic Association (OCAA), winning 11 OCAA championships that season along with 21 OCAA medals; the 11 championships surpassed Fanshawe's own record of six from 2017/18. Fanshawe Athletics also set a new school record for most national medals in a single season (7). The 2018/19 season saw Fanshawe win two Canadian Collegiate Athletic Association (CCAA) national championships, 5 national bronze, 11 provincial gold, 6 provincial silver and 4 provincial bronze medals.
Additionally, Fanshawe has one of the largest campus recreation programs in Ontario with over 4500 students participating in intramurals, extramurals and open recreation every year.
Campuses
London Campus
Fanshawe's main campus in London, Ontario, Canada has twenty-three buildings, including nearly 1,200 apartment-style residence rooms and close to 400 townhouse rooms. The London Campus also includes the School of Transportation Technology and Apprenticeship and the Norton Wolf School of Aviation Technology. The campus has been described as "one of the largest in Ontario" and as a "city within a city".
The campus also has access to the following bus routes provided by the London Transit Commission: 4A, 4B, 17, 17A, 17B, 20, 25, 27, 36, 91 and 104.
London Downtown Campus
Fanshawe's London Downtown Campus was established in 2018. It has three buildings, located at 431 Richmond Street (Access Studies), 130 Dundas Street (Schools of Information Technology and Tourism, Hospitality and Culinary Arts) and 137 Dundas Street (School of Digital and Performing Arts).
London South Campus
Fanshawe's newest campus, London South, is located at 1060 Wellington Rd. South. The newly renovated building opened in September 2019 and currently hosts five programs: Business Management, Business and Information Systems Architecture, Agri-Business Management, Health Care Administration Management and Retirement Residence Management. The campus was formerly a Westervelt College campus, which closed in 2017.
St. Thomas/Elgin Regional Campus
The St. Thomas/Elgin Regional Campus, located in the southeast end of St. Thomas, Ontario, is home to approximately 350 full-time students and 2,000 part-time students. The Campus offers certificate and diploma programs, academic upgrading, apprenticeships, continuing education, corporate training, and career and employment services.
Simcoe/Norfolk Regional Campus
The Simcoe/Norfolk Regional Campus, located in a largely rural and agricultural part of Ontario, is home to almost 200 full-time students and hundreds more part-time students. The Campus offers certificate, diploma and graduate certificate programs, academic upgrading, continuing education, corporate training and career and employment services. Full-time programs that are unique to this campus are Adventure Expeditions and Interpretive Leadership, Developmental Services Worker (Accelerated) and Early Childhood Education (Accelerated). It was the first Fanshawe campus to offer the Agri-Business Management graduate certificate program.
Woodstock/Oxford Regional Campus
The Woodstock/Oxford Regional Campus, located at the junction of Highways 401 and 403, is home to approximately 450 full-time students and 2,000 part-time students. The Campus offers certificate and diploma programs, apprenticeships, academic upgrading, continuing education, corporate training and more. Full-time programs that are unique to this campus are Business – Entrepreneurship and Management, Hair Stylist, Police Foundations (Accelerated) and Heating, Refrigeration and Air Conditioning Technician.
Huron/Bruce Regional Sites
Fanshawe has been in the central Huron/Bruce area, north of London, since approximately 2007. Currently programs are held at the Bruce Technology Skills and Training Centre.
Student government
The Fanshawe Student Union (FSU) is a student representative body designed to meet the various needs and expectations of students attending Fanshawe College. The FSU has had a student newspaper since its inception, first known as Fanfare, changing to The Dam in 1971. It has been known as The Interrobang since approximately 1979 and is Fanshawe's only student newspaper. It is published weekly from September to April and distributed on campus free of charge throughout Fanshawe College. The Interrobang is a member of Canadian University Press.
Notable alumni
David Willsie, Paralympic athlete
Damian Warner, Gold medal decathlete in 2020 Tokyo Games and bronze medal winner in Athletics at the 2016 Olympics in Rio de Janeiro
Caroline Cameron, television sportscaster
Les Stroud, musician, filmmaker, and survival expert best known for TV series Survivorman
Brad Long, chef
Cheryl Hickey, host of ET Canada
Trevor Morris, orchestral composer and music producer
Paul Haggis, screenwriter, producer and director
Emm Gryner, singer-songwriter and actress
Steven Sabados, television show host, interior designer and writer
Kelley Armstrong, writer
Anne Marie DeCicco-Best, longest-serving mayor of London, Ontario
Carol Mitchell, politician
Greg Brady, radio and sports broadcaster
William Peter Randall, musician and politician
Nathan Robitaille, sound editor
Ted Roop, radio and media personality
Bruce Smith, Ontario politician
Sam Stout, retired professional Mixed Martial Artist formerly with the UFC
Glenn Thibeault, politician
Maria Van Bommel, Ontario politician
Jeff Willmore, artist
Craig Mann – Oscar-winning re-recording mixer
Dana Lewis – award-winning television news correspondent
Notable faculty
Gerald Fagan – choral conductor, honorary diploma recipient, former faculty member and Member of the Order of Canada.
Jack Richardson – legendary record producer for Canadian rockers The Guess Who (1969–1975) and many other artists, such as Bob Seger and Alice Cooper. Richardson was appointed to the Order of Canada in 2003.
Moe Berg – Canadian singer-songwriter and record producer, former lead singer of the band The Pursuit of Happiness, and now professor in the Music Industry Arts program.
Dan Brodbeck – Canadian record producer, recording engineer/mixer and professor in the Music Industry Arts program. Brodbeck was nominated for a 2020 Grammy Award in the Best Rock Album category for his work on In The End, the final album by the Irish alternative rock group The Cranberries.
See also
Higher education in Ontario
List of colleges in Ontario
References
External links
Educational institutions established in 1967
1967 establishments in Ontario |
8736036 | https://en.wikipedia.org/wiki/Outline%20of%20the%20Internet | Outline of the Internet | The following outline is provided as an overview of and topical guide to the Internet.
Internet – worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of interconnected smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.
Internet features
Hosting –
File hosting –
Web hosting
E-mail hosting
DNS hosting
Game servers
Wiki farms
World Wide Web –
Websites –
Web applications –
Webmail –
Online shopping –
Online auctions –
Webcomics –
Wikis –
Voice over IP
IPTV
Internet communication technology
Internet infrastructure
Critical Internet infrastructure –
Internet access –
Internet access in the United States –
Internet service provider –
Internet backbone –
Internet exchange point (IXP) –
Internet standard –
Request for Comments (RFC) –
Internet communication protocols
Internet protocol suite –
Link layer
Link layer –
Address Resolution Protocol (ARP/InARP) –
Neighbor Discovery Protocol (NDP) –
Open Shortest Path First (OSPF) –
Tunneling protocol (Tunnels) –
Layer 2 Tunneling Protocol (L2TP) –
Point-to-Point Protocol (PPP) –
Media Access Control –
Ethernet –
Digital subscriber line (DSL) –
Integrated Services Digital Network (ISDN) –
Fiber Distributed Data Interface (FDDI) –
Internet layer
Internet layer –
Internet Protocol (IP) –
IPv4 –
IPv6 –
Internet Control Message Protocol (ICMP) –
ICMPv6 –
Internet Group Management Protocol (IGMP) –
IPsec –
Transport layer
Transport layer –
Transmission Control Protocol (TCP) –
User Datagram Protocol (UDP) –
Datagram Congestion Control Protocol (DCCP) –
Stream Control Transmission Protocol (SCTP) –
Resource reservation protocol (RSVP) –
Explicit Congestion Notification (ECN) –
Application layer
Application layer –
Border Gateway Protocol (BGP) –
Dynamic Host Configuration Protocol (DHCP) –
Domain Name System (DNS) –
File Transfer Protocol (FTP) –
Hypertext Transfer Protocol (HTTP) –
Internet Message Access Protocol (IMAP) –
Internet Relay Chat (IRC) –
LDAP –
Media Gateway Control Protocol (MGCP) –
Network News Transfer Protocol (NNTP) –
Network Time Protocol (NTP) –
Post Office Protocol (POP) –
Routing Information Protocol (RIP) –
Remote procedure call (RPC) –
Real-time Transport Protocol (RTP) –
Session Initiation Protocol (SIP) –
Simple Mail Transfer Protocol (SMTP) –
Simple Network Management Protocol (SNMP) –
SOCKS –
Secure Shell (SSH) –
Telnet –
Transport Layer Security (TLS/SSL) –
Extensible Messaging and Presence Protocol (XMPP) –
History of the Internet
History of the Internet
The Internet was not invented at a single moment; it was developed continually by Internet pioneers.
Predecessors
NPL network – a local area computer network operated by a team from the National Physical Laboratory in England that pioneered the concept of packet switching.
ARPANET – an early packet switching network and the first network to implement the protocol suite TCP/IP which later became a technical foundation of the Internet.
Merit Network – a computer network created in 1966 to connect the mainframe computers at universities that is currently the oldest running regional computer network in the United States.
CYCLADES – a French research network created in the early 1970s that pioneered the concept of packet switching, and was developed to explore alternatives to the ARPANET design.
Computer Science Network (CSNET) – a computer network created in the United States for computer science departments at academic and research institutions that could not be directly connected to ARPANET, due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to development of the global Internet.
National Science Foundation Network (NSFNET) –
History of Internet components
History of packet switching –
very high speed Backbone Network Service (vBNS) –
Network access point (NAP) –
Federal Internet Exchange (FIX) –
Commercial Internet eXchange (CIX) –
Timeline of Internet conflicts
Internet usage
Global Internet usage
Internet traffic
List of countries by number of Internet users
List of sovereign states in Europe by number of Internet users
List of countries by number of broadband Internet subscriptions
List of countries by number of Internet hosts
Languages used on the Internet
List of countries by IPv4 address allocation
Internet Census of 2012
Internet politics
Internet privacy – a subset of data privacy concerning the right to privacy from third parties including corporations and governments on the Internet.
Censorship – the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, politically incorrect or "inconvenient" as determined by government authorities or by community consensus.
Censorship by country – the extent of censorship varies between countries and sometimes includes restrictions to freedom of the Press, freedom of speech, and human rights.
Internet censorship – the control or suppression of what can be accessed, published, or viewed on the Internet enacted by regulators or self-censorship.
Content control software – a type of software that restricts or controls the content an Internet user is able to access.
Internet censorship and surveillance by country
Internet censorship circumvention – the use of techniques and processes to bypass filtering and censored online materials.
Internet law – law governing the Internet, including dissemination of information and software, information security, electronic commerce, intellectual property in computing, privacy, and freedom of expression.
Internet organizations
Domain name registry or Network Information Center (NIC) – a database of all domain names and the associated registrant information in the top level domains of the Domain Name System of the Internet that allow third party entities to request administrative control of a domain name.
Private sub-domain registry – an NIC which allocates domain names in a subset of the Domain Name System under a domain registered with an ICANN-accredited or ccTLD registry.
Internet Society (ISOC) – an American non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy.
InterNIC (historical) – the organization primarily responsible for Domain Name System (DNS) domain name allocations until 2011 when it was replaced by ICANN.
Internet Corporation for Assigned Names and Numbers (ICANN) – a nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces of the Internet, ensuring the network's stable and secure operation.
Internet Assigned Numbers Authority (IANA) – a department of ICANN which allocates domain names and maintains IP addresses.
Internet Activities Board (IAB) –
Internet Engineering Task Force (IETF) –
Non-profit Internet organizations
Advanced Network and Services (ANS) (historical) –
Internet2 –
Merit Network –
North American Network Operators' Group (NANOG) –
Commercial Internet organizations
Amazon.com –
ANS CO+RE (historical) –
Google – an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, search engine, cloud computing, software, and hardware.
Cultural and societal implications of the Internet
Sociology – the scientific study of society, including patterns of social relationships, social interaction, and culture.
Sociology of the Internet – the application of sociological theory and methods to the Internet, including analysis of online communities, virtual worlds, and organizational and social change catalyzed through the Internet.
Digital sociology – a sub-discipline of sociology that focuses on understanding the use of digital media as part of everyday life, and how these various technologies contribute to patterns of human behavior, social relationships and concepts of the self.
Internet culture
List of web awards
Underlying technology
MOSFET (MOS transistor)
CMOS (complementary MOS)
LDMOS (lateral diffused MOS)
Power MOSFET
RF CMOS (radio frequency CMOS)
Optical networking
Fiber-optic communication
Laser
Optical fiber
Telecommunications network
Modem
Telecommunication circuit
Wireless network
Base station
Cellular network
RF power amplifier
Router
Transceiver
See also
Outline of information technology
External links
"10 Years that changed the world"—WiReD looks back at the evolution of the Internet over last 10 years
Berkman Center for Internet and Society at Harvard
A comprehensive history with people, concepts and quotations
CBC Digital Archives—Inventing the Internet Age
How the Internet Came to Be
Internet Explained
Global Internet Traffic Report
The Internet Society History Page
RFC 801, planning the TCP/IP switchover
Archive CBC Video from 1993 about the Internet
"The beginners guide to the internet."
"Warriors of the net - A movie about the internet."
"The Structure of the Internet."
Internet
Internet |
26069127 | https://en.wikipedia.org/wiki/Veracode | Veracode | Veracode is an application security company based in Burlington, Massachusetts. Founded in 2006, the company provides SaaS application security that integrates application analysis into development pipelines. Veracode provides multiple security analysis technologies on a single platform, including static analysis (or white-box testing), dynamic analysis (or black-box testing), and software composition analysis. The company serves over 2,500 customers worldwide and, as of February 2021, has assessed over 25 trillion lines of code.
History
Veracode was founded by Chris Wysopal and Christien Rioux, former engineers from @stake, a Cambridge, Massachusetts-based security consulting firm known for employing former “white hat” hackers from L0pht Heavy Industries. Much of Veracode's software was written by Rioux. In 2007, the company launched SecurityReview, a service which can be used to test code in order to find vulnerabilities that could lead to cybersecurity breaches or hacking. The service is intended to be used as an alternative to penetration testing, which involves hiring a security consultant to hack into a system. On November 29, 2011, the company announced that it had appointed Robert T. Brennan, former CEO of Iron Mountain Incorporated, as its new chief executive officer.
As of 2014, Veracode's customers included three of the top four banks in the Fortune 100. Fortune reported in March 2015 that Veracode planned to file for an initial public offering (IPO) later that year. However, the IPO did not occur. In September 2014, the firm announced a late-stage funding round led by Wellington Management Company, with participation from existing investors.
In the company's annual cybersecurity report for 2015, it was found that most sectors failed industry-standard security tests of their web and mobile applications and that government is the worst performing sector in regards to fixing security vulnerabilities. This annual report also found that "four out of five applications written in popular web scripting languages contain at least one of the critical risks in an industry-standard security benchmark."
On March 9, 2017, CA Technologies announced it was acquiring Veracode for approximately $614 million in cash, and the acquisition was completed on April 3, 2017.
On July 11, 2018, Broadcom announced that it was acquiring Veracode parent CA Technologies for $18.9 billion in cash. The acquisition was completed on November 5, 2018, and Broadcom thus became the new owner of the Veracode business. On the same day, Thoma Bravo, a private equity firm headquartered in San Francisco, California, announced that it had agreed to acquire Veracode from Broadcom for $950 million cash.
In 2019, Sam King became the CEO.
Veracode’s 2020 annual cybersecurity report found that half of application security flaws remain open 6 months after discovery. In 2020, Veracode scanned over 11 trillion lines of code, helping to correct approximately 16 million flaws.
Reception
In 2013, Veracode ranked 20th on the Forbes list of the Top 100 Most Promising Companies in America. Veracode was named one of the "20 Coolest Cloud Security Vendors of the 2014 Cloud 100" by CRN Magazine. Gartner named Veracode a Leader in its Magic Quadrant for Application Security Testing every year from 2013 through 2021. Veracode also received the highest scores for enterprise and public-facing web applications in the Gartner Critical Capabilities for Application Security Testing. In October 2020, the company was recognized by Gartner Peer Insights as a 2020 Customers' Choice for Application Security Testing. That same year, the company was also named a Gold Winner in the software category of the Cybersecurity Excellence Awards. Also in 2020, the company was honored by The Commonwealth Institute and The Boston Globe as the top women-led software business in Massachusetts. In 2021, Veracode was named a Leader in The Forrester Wave: Static Application Security Testing, Q1 2021 and won first place in TrustRadius' 2021 Best AppSec Feature Set and Best AppSec Customer Support categories.
Products
Veracode provides multiple software security analysis technologies on a single SaaS platform, including static analysis (or white-box testing), dynamic analysis (or black-box testing), and software composition analysis, all of which help detect and prevent software vulnerabilities such as cross-site scripting (XSS) and SQL injection. In February 2020, Veracode launched DevSecOps and Veracode Security Labs. In July 2020, Veracode released a free edition of Veracode Security Labs which is accessible to anyone.
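As a concrete illustration of the class of flaw such analysis tools look for, the snippet below contrasts a SQL injection pattern with its parameterised fix. This is a generic, hypothetical Python example for illustration only; it is not code from, or analysed by, Veracode's products.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: untrusted input is concatenated into the SQL statement,
    # so an input like  ' OR '1'='1  changes the meaning of the query.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterised query keeps the input as data rather than SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Static analysis tools of the kind described above typically flag the first pattern by tracing untrusted input into a query string, while the second pattern passes the same input through the database driver's placeholder mechanism.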
See also
List of tools for static code analysis
References
Further reading
Static program analysis tools
Software companies based in Massachusetts
American companies established in 2006
Software companies established in 2006
Computer security software companies
Computer security software
2006 establishments in Massachusetts
Companies based in Burlington, Massachusetts
CA Technologies
2017 mergers and acquisitions
2018 mergers and acquisitions
Private equity portfolio companies
Software companies of the United States |
562205 | https://en.wikipedia.org/wiki/Grady%20Booch | Grady Booch | Grady Booch (born February 27, 1955) is an American software engineer, best known for developing the Unified Modeling Language (UML) with Ivar Jacobson and James Rumbaugh. He is recognized internationally for his innovative work in software architecture, software engineering, and collaborative development environments.
Education
Booch earned his bachelor's degree in 1977 from the United States Air Force Academy and a master's degree in electrical engineering in 1979 from the University of California, Santa Barbara.
Career and research
Booch worked at Vandenberg Air Force Base after he graduated. He started as a project engineer and later managed ground-support missions for the space shuttle and other projects. After he gained his master's degree he became an instructor at the Air Force Academy.
Booch served as Chief Scientist of Rational Software Corporation from its founding in 1981 through its acquisition by IBM in 2003, where he continued to work until March 2008. After this he became Chief Scientist, Software Engineering in IBM Research and series editor for Benjamin Cummings.
Booch has devoted his life's work to improving the art and the science of software development. In the 1980s, he wrote one of the more popular books on programming in Ada. He is best known for developing the Unified Modeling Language with Ivar Jacobson and James Rumbaugh in the 1990s.
IBM 1130
Booch got his first exposure to programming on an IBM 1130.
... I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I'm sure he gave it to me thinking, "I'll never hear from this kid again." I returned the following week saying, "This is really cool. I've read the whole thing and have written a small program. Where can I find a computer?" The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career. Thank you, IBM.
Booch method
Booch developed the Booch method of software development, which he presents in his 1991/94 book, Object Oriented Analysis and Design With Applications. He advises adding more classes to simplify complex code. The Booch method is a technique used in software engineering. It is an object modeling language and methodology that was widely used in object-oriented analysis and design. It was developed by Booch while at Rational Software.
The notation aspect of the Booch method has now been superseded by the Unified Modeling Language (UML), which features graphical elements from the Booch method along with elements from the object-modeling technique (OMT) and object-oriented software engineering (OOSE).
Methodological aspects of the Booch method have been incorporated into several methodologies and processes, the primary such methodology being the Rational Unified Process (RUP).
Design patterns
Booch is also an advocate of design patterns. For instance, he wrote the foreword to Design Patterns, an early and highly influential book in the field.
IBM Research - Almaden
He is now part of IBM Research - Almaden, serving as Chief Scientist for Software Engineering, where he continues his work on the Handbook of Software Architecture and leads several software engineering projects that lie beyond the constraints of immediate product horizons. Booch continues to engage with customers working on real problems and maintains close relationships with academia and other research organizations around the world. He has served as architect and architectural mentor for numerous complex software-intensive systems in a wide range of domains.
Publications
Grady Booch published several articles and books. A selection:
Software Engineering with Ada.
Object Solutions: Managing the Object-Oriented Project.
The Unified Software Development Process. With Ivar Jacobson and James Rumbaugh.
The Complete UML Training Course. With James Rumbaugh and Ivar Jacobson.
The Unified Modeling Language Reference Manual, Second Edition. With James Rumbaugh and Ivar Jacobson.
The Unified Modeling Language User Guide, Second Edition. With James Rumbaugh and Ivar Jacobson.
Object-Oriented Analysis and Design with Applications.
Awards and honors
In 1995, Booch was inducted as a Fellow of the Association for Computing Machinery. He was named an IBM Fellow in 2003, soon after his entry into IBM, and assumed his current role on March 18, 2008. He was recognized as an IEEE Fellow in 2010. In 2012, the British Computer Society announced Booch would receive the Lovelace Medal and give the 2013 Lovelace Lecture. He gave the Turing Lecture in 2007. He was awarded the IEEE Computer Society Computer Pioneer award in 2016 for his pioneering work in Object Modeling that led to the creation of the Unified Modeling Language (UML).
References
External links
1955 births
Ada (programming language)
American computer scientists
Fellows of the Association for Computing Machinery
IBM employees
Living people
American software engineers
Unified Modeling Language
United States Air Force Academy alumni
University of California, Santa Barbara alumni
Fellow Members of the IEEE
IBM Fellows
IBM Research computer scientists
Open source advocates |
2034240 | https://en.wikipedia.org/wiki/Off%20the%20Hook%20%28radio%20program%29 | Off the Hook (radio program) | Off the Hook is a hacker-oriented weekly talk radio program, hosted by Emmanuel Goldstein, which focuses on the societal ramifications of information technology and the laws that regulate how people use them. It airs Wednesday nights at 8:00 p.m. Eastern Time in New York City on the community radio station WBAI 99.5 FM. It is also simulcast online via streaming MP3, rebroadcast on various other radio stations, and has been made available as a podcast (since long before that term was coined).
History
Premiere
Off the Hook first aired on Thursday, October 7, 1988. It was originally set to debut Friday, August 12, 1988, but a fire on the radio transmitter floor of the Empire State Building forced a postponement.
Notable events
Some notable events in the program's history include:
On November 30, 1999, journalist Amy Goodman reported live from the World Trade Organization protests, while being repeatedly approached by police and tear-gassed.
As an April Fool's Day prank in 2008, the crew faked a hack on Barack Obama's campaign website.
As an April Fool's Day prank in 2009, the show staged a mock shutdown and takeover of WBAI by a new country station. Rather than the show's intro, the hour opened with an apparent station sign-off followed by the introduction of "New York's New Radio Station," playing a "10,000 song marathon" to celebrate the birth of "Country 99.5". For 17 minutes WBAI broadcast to the Greater New York area as a country station.
Possible conclusion
On November 13, 2012, it was announced that Off the Hook might be coming to an end, owing to 2600's frustration with WBAI as well as difficulty accessing the studio and its resources in the wake of Hurricane Sandy. However, new episodes have continued to air on WBAI.
Show format
After a quick presentation of the panelist(s) or on-air guest(s), the radio show normally starts with a report and discussion of the previous week's most interesting hacker, technology, and activist related news. Sometimes, it also features an interview with external guests.
Listener contributions
Toward the end of the program, Goldstein often reads listener e-mails and takes listener phone calls, time permitting.
Listener calls vary from people commenting and asking questions about previously discussed topics to reporting their own news. Calls are taken in an unfiltered fashion, with callers selected at random and immediately put on-air (although there is a seven-second delay). The show does not use a producer to screen calls before bringing them on-air. As such, it is not uncommon for callers to speak off topic or to seek help with a computer-related problem, possibly mistaking Off the Hook for the subsequent program on WBAI, The Personal Computer Show. It is also not uncommon for calls to be dropped, or for callers to hang up, much to the consternation of the show's hosts.
Since the show has an international audience, due to its streamed web presence and coverage of topics often of international interest, callers come from many countries in addition to the US.
Personalities
Many individuals, from across the hacker, activist, computer security, etc. communities, have played active roles in or appeared on the show over the years.
Emmanuel Goldstein has regularly hosted the show since its inception.
See also
2600
Gonzo journalism
Hackers on Planet Earth (HOPE) conference
Hacker (programmer subculture)
Hacktivism
Phreaking
References
External links
Off the Hook official site
Show page at WBAI
Audio podcasts
2600: The Hacker Quarterly
Pacifica Foundation programs
Works about computer hacking |
22887067 | https://en.wikipedia.org/wiki/2009%E2%80%9310%20USC%20Trojans%20women%27s%20basketball%20team | 2009–10 USC Trojans women's basketball team | The 2009–10 USC Trojans women's basketball team represented the University of Southern California in the 2009–10 NCAA Division I women's basketball season. The Trojans were coached by Michael Cooper, were members of the Pacific-10 Conference, and attempted to win the NCAA championship.
Offseason
April 8: Head coach Mark Trakh resigned after guiding the Women of Troy for five seasons. Trakh had a 90–64 (.584) record. The Women of Troy won 20 games in 2005 and then 19 in 2006 as both teams advanced to the second round of the NCAA tournament. Four of his teams made it to the semifinals of the Pac-10 Tournament, and his squads had an 8–3 mark against crosstown rival UCLA. His players made various All-Pac-10 teams 20 times and Pac-10 All-Academic squads 14 times. He signed Top 10 recruiting classes in each of his final four seasons, including the nation's No. 1 group in 2006, and 7 of his signees were named McDonald's All-Americans. In his final season (2008–09), the Women of Troy went 17–15 overall, tied for fourth in the Pac-10 with a 9–9 mark and made it to the Pac-10 Tournament final for the first time in history before losing to eventual NCAA Final Four participant Stanford.
April 9: USC senior point guard Camille LeNoir was selected in the second round of the 2009 WNBA Draft. She was chosen by the Washington Mystics as the 23rd pick overall. LeNoir became the eighth Trojan to be selected in the WNBA Draft.
May 1: Los Angeles Sparks head coach and former Los Angeles Lakers great Michael Cooper has been named head coach of the USC women's basketball team, effective at the completion of the Sparks' 2009 season. Joining Cooper's USC staff will be long-time collegiate and high school assistant Ervin Monier, who will oversee the program as associate head coach until Cooper's arrival.
May 4: The Women of Troy will participate in the 2009 US Virgin Islands Paradise Jam at the University of Virgin Islands. The event is celebrating its tenth anniversary. Games will be played at UVI's Sports and Fitness Center, the Caribbean's premier basketball facility located in Charlotte Amalie, St. Thomas.
June 9: USC guard Jacki Gemelos has had her playing career delayed again due to a knee injury. Already the victim of three ligament tears that kept her out of action for her first three seasons at USC, Gemelos suffered another setback when she recently had surgery to replace the ACL graft in her left knee. Gemelos is expected to be sidelined from competition until January 2010.
Season summary
January 21, 2010 – The Pacific-10 Conference issued a public reprimand to Michael Cooper for his post-game comments following USC's game with UCLA on Sunday, January 17.
Roster
Games
|-
!colspan=8| Non-Conference Regular Season Schedule
|-
!colspan=8| Pacific-10 Conference Regular Season Schedule
Player stats
Postseason
Pac-10 Basketball Tournament
See 2010 Pacific-10 Conference Women's Basketball Tournament
NCAA Basketball Tournament
Awards and honors
Team players drafted into the WNBA
See also
2009-10 USC Trojans men's basketball team
References
External links
Official Site
USC Trojans women's basketball seasons
Usc
USC Trojans
21318931 | https://en.wikipedia.org/wiki/Sebeka%20High%20School | Sebeka High School | Sebeka High School is a public high school in Sebeka, Minnesota, United States serving grades 7–12. The high school is part of Sebeka Public School (Independent School District 820), and is contained within the same building as the elementary school. Sebeka's mascot is the Trojan.
In addition to Sebeka, the school also serves the community of Nimrod.
Academics
Sebeka High School operates on an 8:17 am to 3:09 pm schedule. This includes 7 class periods and a break for lunch.
Sebeka offers two Advanced Placement courses: AP U.S. History and AP Calculus.
Extracurricular activities
Sports
Sebeka High School currently offers eleven sports, seven for boys (baseball, basketball, cross country, football, golf, track and field, and wrestling) and seven for girls (basketball, cheerleading, cross country, golf, softball, track and field, and volleyball).
In many of these sports (cross country, football, wrestling, track and field, golf, and cheerleading), Sebeka is in a cooperative with Menahga High School known as United North Central (UNC). The colors of the combined teams are black and gold, and they are known as the Warriors.
The Trojans' boys basketball team made its first ever trip to the state tournament in 2009. The team returned to the state tournament in 2010, and lost to Minnesota Transitions in the state championship game, 61–52. Three of the Trojans' five starters (Joey Cuperus, John Clark, and Alex Brockpahler) were named to the All-Tournament team.
The Trojans' baseball team also advanced to the state tournament in 2010, the team's third such appearance. Sebeka advanced to the Class A title game for the first time in school history before losing to Eden Valley-Watkins, 7-4.
Organizations and Academic Competitions
Sebeka High School students are also involved in a variety of extracurricular organizations and activities, including FFA, BPA, NHS, FCCLA, Knowledge Bowl, Speech, and the one-act play. In 2008, Sebeka's Knowledge Bowl team placed 3rd in the Class A state meet, while the team earned a 5th-place finish in 2009. The team came back to state in 2010 and won the Class A Tournament, scoring 162.5 points in the process. Sebeka also won the state meet in 2011.
Music
Band
Sebeka High School's Concert Band is currently directed by Mr. David Kerkvliet, who has held this position for 11 years. The band has received "Superior" ratings (the highest available ranking) at contest for 14 years in a row.
Choir
Sebeka High School's choir is directed by Mrs. Melissa Koch. In 2010, the choir received three "Superior" ratings from the three judges at contest.
Notable alumni
Dick Stigman, a Major League Baseball player.
Carrie Lee, who won the Miss Minnesota competition and was a Miss USA competitor in 2005.
References
External links
Sebeka High School
1895 establishments in Minnesota
Educational institutions established in 1895
Public middle schools in Minnesota
Public high schools in Minnesota
Schools in Wadena County, Minnesota |
6062487 | https://en.wikipedia.org/wiki/Unix%20International | Unix International | Unix International (UI) was an association created in 1988 to promote open standards, especially the Unix operating system. Its most notable members were AT&T and Sun Microsystems, and in fact the commonly accepted reason for its existence was as a counterbalance to the Open Software Foundation (OSF), itself created in response to AT&T's and Sun's Unix partnership of that time. UI and OSF thus represented the two sides of the Unix wars in the late 1980s and early 1990s.
In May 1993, the major members of both UI and OSF announced the Common Open Software Environment (COSE) initiative. This was followed by the merging of UI and OSF into a "new OSF" in March 1994, which in turn merged with X/Open in 1996, forming The Open Group.
References
Chapter 11. OSF and UNIX International (Peter H. Salus, The Daemon, the GNU and the Penguin)
UI / OSF merger announcements
Standards organizations in the United States
Unix history |
18934755 | https://en.wikipedia.org/wiki/Video%20game%20console%20emulator | Video game console emulator | A video game console emulator is a type of emulator that allows a computing device to emulate a video game console's hardware and play its games on the emulating platform. More often than not, emulators carry additional features that surpass the limitations of the original hardware, such as broader controller compatibility, timescale control, greater performance, clearer quality, easier access to memory modifications (like GameShark), one-click cheat codes, and unlocking of gameplay features. Emulators are also a useful tool in the development process of homebrew demos and the creation of new games for older, discontinued, or rare consoles.
The code and data of a game are typically supplied to the emulator by means of a ROM file (a copy of game cartridge data) or an ISO image (a copy of optical media), which are created by either specialized tools for game cartridges or regular optical drives reading the data. Most games remain under copyright long after the original system and products have been discontinued; this leads many consumers and emulation enthusiasts to obtain games freely from various internet sites rather than legitimately purchasing and ripping the contents (although, for optical media, ripping is becoming more common among legitimate owners). As an alternative, specialized adapters such as the Retrode allow emulators to directly access the data on game cartridges without needing to copy it into a ROM image first.
History
By the mid-1990s, personal computers had progressed to the point where it was technically feasible to replicate the behavior of some of the earliest consoles entirely through software, and the first unauthorized, non-commercial console emulators began to appear. These early programs were often incomplete, only partially emulating a given system, resulting in defects. Few manufacturers published technical specifications for their hardware, which left programmers to deduce the exact workings of a console through reverse engineering. Nintendo's consoles tended to be the most commonly studied; for example, the most advanced early emulators reproduced the workings of the Nintendo Entertainment System, the Super Nintendo Entertainment System, and the Game Boy. The first such recognized emulator was released around 1996 as one of the prototype projects that eventually merged into the SNES9X product. Programs like Marat Fayzullin's iNES, VirtualGameBoy, Pasofami (NES), Super Pasofami (SNES), and VSMC (SNES) were the most popular console emulators of this era. Another curiosity was Yuji Naka's unreleased NES emulator for the Genesis, possibly the first instance of a software emulator running on a console. Additionally, as the Internet gained wider availability, distribution of both emulator software and ROM images became more common, helping to popularize emulators.
Legal attention was drawn to emulation with the release of UltraHLE, an emulator for the Nintendo 64 released in 1999 while the Nintendo 64 was still Nintendo's primary console; its next console, the GameCube, would not be released until 2001. UltraHLE was the first emulator to be released for a current console, and it was seen to have some effect on Nintendo 64 sales, though the extent of that effect, relative to the diminishing sales of an aging console, was not clear. Nintendo pursued legal action to stop the emulator project, and while the original authors ceased development, the project was continued by others who had obtained the source code. Since then, Nintendo has generally taken the lead in actions against emulation projects or distributions of emulated games from its consoles, compared to other console or arcade manufacturers.
This rise in popularity opened the door to foreign video games, and exposed North American gamers to Nintendo's censorship policies. This rapid growth in the development of emulators in turn fed the growth of the ROM hacking and fan translation scenes. The release of projects such as RPGe's English-language translation of Final Fantasy V drew even more users into the emulation scene.
Methods
Emulators can be designed in three ways: purely in software, which is the most common form (for example MAME running arcade ROM images); purely in hardware, such as the ColecoVision's adapter for accepting Atari VCS cartridges; and hybrid solutions, such as the hardware bridgeboards for various Amiga computers that could run IBM PC-compatible software.
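To illustrate the software approach, the core of most software emulators is an interpreter loop that repeatedly fetches, decodes, and executes the guest CPU's instructions. The sketch below is a minimal, hypothetical example: the opcodes, registers, and memory size are invented for illustration and do not correspond to any real console.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <iterator>

// A minimal, hypothetical 8-bit guest CPU used only for illustration.
struct GuestCpu {
    uint16_t pc = 0;                       // program counter
    uint8_t  a  = 0;                       // accumulator
    std::array<uint8_t, 0x10000> memory{}; // 64 KiB of guest memory (ROM + RAM)
    bool halted = false;

    // One iteration of the classic fetch-decode-execute cycle.
    void step() {
        uint8_t opcode = memory[pc++];     // fetch
        switch (opcode) {                  // decode + execute
            case 0x00: halted = true;              break; // HLT
            case 0x01: a = memory[pc++];           break; // LDA #imm
            case 0x02: a += memory[pc++];          break; // ADD #imm
            case 0x03: memory[memory[pc++]] = a;   break; // STA zero-page
            default:   halted = true;              break; // unknown opcode
        }
    }
};

int main() {
    GuestCpu cpu;
    // Tiny invented "ROM": load 2, add 3, store the result at address 0x10, halt.
    const uint8_t rom[] = {0x01, 2, 0x02, 3, 0x03, 0x10, 0x00};
    std::copy(std::begin(rom), std::end(rom), cpu.memory.begin());
    while (!cpu.halted) cpu.step();        // run until HLT
    return cpu.memory[0x10];               // exits with 5
}
```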
An emulator is typically created through reverse engineering of the hardware, so as to avoid any possible conflict with non-public intellectual property. Some information about a system's specifications may be made public for developers and can be used to start emulation efforts, but layers of information often remain trade secrets, such as encryption details. Operating code stored in the hardware's BIOS may be disassembled and analyzed under a clean room design, with one person performing the disassembly and another, working separately, documenting what the code does. Once enough is known about how the hardware interprets the game software, an emulator for the target hardware can be constructed. Emulation developers typically avoid any information that may come from untraceable sources so as not to contaminate the clean room nature of their project. For example, in 2020, a large trove of information related to Nintendo's consoles was leaked, and teams working on Nintendo console emulators, such as the Dolphin emulator for the GameCube and Wii, stated that they were staying far away from the leaked information to avoid tainting their projects.
Once an emulator is written, it requires a copy of the game software, a step that may have legal consequences. Typically, the user must copy the contents of a ROM cartridge to computer files or images that can be read by the emulator, a process known as "dumping" the ROM. A similar concept applies to other proprietary formats, such as PlayStation CD games. While not required for emulating the earliest arcade or home consoles, most emulators also require a dump of the hardware's BIOS, which can vary with distribution region and hardware revision. In some cases, emulators allow the application of ROM patches, which update the ROM or BIOS dump to fix incompatibilities with newer platforms or to change aspects of the game itself. The emulator then uses the BIOS dump to mimic the hardware, while the ROM dump (with any patches) is used to replicate the game software.
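As a rough illustration of how an emulator front end consumes these dumps, the sketch below loads a ROM image and a BIOS image from disk and applies a patch in the community IPS format (a "PATCH" header, records of 3-byte offset, 2-byte length and data, and an "EOF" terminator). The file names are placeholders, and real emulators perform far more validation (header checks, checksums, mapper detection), so treat this only as a sketch of the general flow.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <vector>

// Read an entire file (ROM dump, BIOS dump, or patch) into a byte buffer.
std::vector<uint8_t> load_file(const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    if (!f) throw std::runtime_error("cannot open " + path);
    std::vector<uint8_t> data((std::istreambuf_iterator<char>(f)),
                              std::istreambuf_iterator<char>());
    return data;
}

// Apply an IPS patch: "PATCH", then records of 3-byte offset, 2-byte size, data,
// terminated by "EOF". RLE records (size == 0) are omitted for brevity.
void apply_ips(std::vector<uint8_t>& rom, const std::vector<uint8_t>& ips) {
    size_t p = 5;                                             // skip the "PATCH" magic
    while (p + 3 <= ips.size()) {
        if (ips[p] == 'E' && ips[p + 1] == 'O' && ips[p + 2] == 'F') break;
        if (p + 5 > ips.size()) break;                        // truncated patch
        size_t offset = (ips[p] << 16) | (ips[p + 1] << 8) | ips[p + 2];
        size_t size   = (ips[p + 3] << 8) | ips[p + 4];
        p += 5;
        if (rom.size() < offset + size) rom.resize(offset + size);
        for (size_t i = 0; i < size; ++i) rom[offset + i] = ips[p + i];
        p += size;
    }
}

int main() {
    // Placeholder file names, purely for illustration.
    std::vector<uint8_t> rom  = load_file("game.rom");
    std::vector<uint8_t> bios = load_file("console_bios.bin");
    std::vector<uint8_t> ips  = load_file("translation.ips");
    apply_ips(rom, ips);
    // ...hand `bios` and the patched `rom` to the emulation core...
    return 0;
}
```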
Perspectives
Outside of official usage, emulation has generally been viewed negatively by video game console manufacturers and game developers. The largest concern is the copyright infringement associated with ROM images of games, which are typically distributed freely and without hardware restrictions. While this directly affects potential sales of emulated games, and thus publishers and developers, the structure of the industry's value chain means console makers can also suffer financial harm. Further, emulation challenges the industry's use of the razor-and-blades model for console games, in which consoles are sold near cost and revenue is instead obtained from licenses on game sales. With console emulation being developed even while consoles are still on the market, console manufacturers are forced to continue to innovate, bring more games for their systems to market, and move quickly onto new technology to sustain their business model. There are further concerns related to intellectual property in a console's branding and in games' assets that could be misused, though these are issues less with emulation itself than with how the software is subsequently used.
Alternatively, emulation is seen to enhance video game preservation efforts, both by shifting game data from outdated technology onto newer, more durable formats and by providing software or hardware alternatives to aging hardware. Some users also see emulation as a means of preserving games from companies that have long since gone bankrupt or disappeared in the industry's earlier market crash and contractions, and whose properties' ownership is unclear. Emulation can also enhance the functionality of the original game in ways that would otherwise not be possible, such as adding localizations via ROM patches or new features such as save states.
Legal issues
United States
As computers and global computer networks continued to advance and emulator developers grew more skilled in their work, the length of time between the commercial release of a console and its successful emulation began to shrink. Fifth-generation consoles such as the Nintendo 64 and PlayStation, and sixth-generation handhelds such as the Game Boy Advance, saw significant progress toward emulation while still in production. This led to an effort by console manufacturers to stop unofficial emulation, but consistent court losses, such as Sega v. Accolade, 977 F.2d 1510 (9th Cir. 1992), Sony Computer Entertainment, Inc. v. Connectix Corporation, 203 F.3d 596 (2000), and Sony Computer Entertainment America v. Bleem, 214 F.3d 1022 (2000), have had the opposite effect: courts have ruled that emulators developed through clean room design are legal. The Librarian of Congress, under the Digital Millennium Copyright Act (DMCA), has codified these rules as allowed exemptions for bypassing technical copyright protections on console hardware. However, emulator developers cannot incorporate code that may have been embedded within the hardware BIOS, nor ship the BIOS image with their emulators.
Unauthorized distribution of copyrighted code remains illegal, according to both country-specific copyright and international copyright law under the Berne Convention. Accordingly, video game publishers and developers have taken legal action against websites that illegally redistribute their copyrighted software, successfully forcing sites to remove their titles or taking down the websites entirely.
Under United States law, obtaining a dumped copy of the original machine's BIOS is legal under the ruling Lewis Galoob Toys, Inc. v. Nintendo of America, Inc., 964 F.2d 965 (9th Cir. 1992), as fair use, as long as the user owns a legally purchased copy of the machine. To avoid the need for a BIOS dump, however, several emulators for platforms such as the Game Boy Advance can run without a BIOS file, using high-level emulation to simulate BIOS subroutines at a slight cost in emulation accuracy.
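The "high-level emulation" mentioned above replaces the copyrighted BIOS code with native re-implementations of its services: when the emulated program calls a known BIOS entry point, the emulator intercepts the call and performs the equivalent work itself instead of interpreting BIOS instructions. The sketch below shows the general idea with invented entry-point addresses, registers, and a toy memory-fill service; it does not reflect any particular console's BIOS interface.

```cpp
#include <array>
#include <cstdint>
#include <functional>
#include <unordered_map>

struct Machine {
    uint32_t pc = 0;
    std::array<uint32_t, 4> regs{};          // r0..r3, invented calling convention
    std::array<uint8_t, 0x100000> ram{};     // 1 MiB of guest RAM
};

// Native replacements ("HLE stubs") keyed by the guest address of the BIOS routine.
// The addresses and the services are hypothetical, for illustration only.
using BiosStub = std::function<void(Machine&)>;
const std::unordered_map<uint32_t, BiosStub> kBiosStubs = {
    {0x00001F00, [](Machine& m) {            // "memset" service: r0=dst, r1=value, r2=len
         for (uint32_t i = 0; i < m.regs[2]; ++i)
             m.ram[m.regs[0] + i] = static_cast<uint8_t>(m.regs[1]);
     }},
};

// Called before executing an instruction: if the PC is a known BIOS entry point,
// run the native stub and "return" to the caller instead of emulating BIOS code.
bool try_hle(Machine& m) {
    auto it = kBiosStubs.find(m.pc);
    if (it == kBiosStubs.end()) return false;
    it->second(m);
    m.pc = m.regs[3];                        // r3 holds the return address (invented)
    return true;
}

int main() {
    Machine m;
    m.regs = {0x100, 0xAB, 16, 0x8000};      // dst, value, length, return address
    m.pc   = 0x00001F00;                     // guest code "jumped" into the BIOS
    try_hle(m);                              // fills 16 bytes of RAM natively
    return m.ram[0x100] == 0xAB ? 0 : 1;
}
```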
Impersonation by malware
Due to their popularity, emulators have also been a target of online scams in the form of trojan horse programs designed to mimic the appearance of a legitimate emulator, which are then promoted through spam, on YouTube, and elsewhere. Some scams, such as the purported "PCSX4" emulator, have gone so far as to set up a fake GitHub repository, presumably for added trustworthiness, especially to those unfamiliar with open-source software development. The Federal Trade Commission has since issued an advisory warning users to avoid downloading such software, in response to reports of a purported Nintendo Switch emulator released by various websites as a front for a survey scam.
Official use
Due to the high demand for playing old games on modern systems, consoles have begun incorporating emulation technology. The most notable of these is Nintendo's Virtual Console. Originally released for the Wii and later present on the 3DS and Wii U, Virtual Console uses software emulation to allow the purchase and play of games for old systems on modern hardware. Though not all games are available, the Virtual Console has a large collection of games spanning a wide variety of consoles. Its library of past games currently consists of titles originating from the Nintendo Entertainment System, Super NES, Game Boy, Game Boy Color, Nintendo 64, Game Boy Advance, Nintendo DS, and Wii, as well as Sega's Master System and Genesis/Mega Drive, NEC's TurboGrafx-16, and SNK's Neo Geo. The service for the Wii also includes games for platforms that were known only in select regions, such as the Commodore 64 (Europe and North America) and MSX (Japan), as well as Virtual Console Arcade, which allows players to download video arcade games. Virtual Console titles have been downloaded over ten million times. Each game is distributed with a dedicated emulator tweaked to run the game as well as possible. However, it lacks the enhancements that unofficial emulators provide, and many titles remain unavailable.
Until the 4.0.0 firmware update, the Nintendo Switch system software contained an embedded NES emulator, referred to internally as "flog", running the game Golf (with motion controller support using Joy-Con). The Easter egg was believed to be a tribute to former Nintendo president Satoru Iwata, who died in 2015: the game was only accessible on July 11 (the date of his death), Golf was programmed by Iwata, and the game was activated by performing a gesture that Iwata had famously used during Nintendo's video presentations. It was suggested that the inclusion of Golf was intended as a digital form of omamori, a traditional form of Japanese amulet intended to provide luck or protection. As part of its Nintendo Switch Online subscription service, Nintendo subsequently released an app featuring an on-demand library of NES and SNES titles updated regularly. The app offers features similar to those of Virtual Console titles, including save states, as well as a pixel scaler mode and an effect that simulates CRT television displays.
Due to differences in hardware, the Xbox 360 is not natively backwards compatible with original Xbox games. However, Microsoft achieved backwards compatibility with popular titles through an emulator. On June 15, 2015, Microsoft announced that the Xbox One would be backwards compatible with the Xbox 360 through emulation. In June 2017, Microsoft announced that original Xbox titles would also be available for backwards compatibility through emulation; because the original Xbox runs on the x86 architecture, CPU emulation is unnecessary, greatly improving performance. The PlayStation 3 uses software emulation to play original PlayStation titles, and the PlayStation Store sells games that run through an emulator within the machine. In the original Japanese and North American 60 GB models, original PS2 hardware is present to run titles; however, all PAL models, and later models released in Japan and North America, removed some PS2 hardware components, replacing them with software emulation working alongside the video hardware to achieve partial hardware/software emulation. In later revisions, backwards compatibility with PS2 titles was removed completely along with the PS2 graphics chip, and eventually Sony released PS2 titles with software emulation on the PlayStation Store.
Commercial developers have also used emulation as a means to repackage and reissue older games on newer consoles in retail releases. For example, Sega has created several collections of Sonic the Hedgehog games. Before the Virtual Console, Nintendo also used this tactic, such as Game Boy Advance re-releases of NES titles in the Classic NES Series.
Other uses
Although the primary purpose of emulation is to make older video games run on newer systems, the extra flexibility of software emulation brings several advantages that were not possible on the original systems.
ROM hacking and modification
Disk image loading is a necessity for most console emulators, as most computing devices lack the hardware required to run older console games directly from the physical game media. Even with emulators for optical-media systems such as the PlayStation and PlayStation 2, attempting to run games from the actual disc may cause problems such as hangs and malfunctions, as PC optical drives are not designed to spin discs the way those consoles do. This, however, makes it far easier to modify the game files contained within ROM or disc images. Amateur programmers and gaming enthusiasts have produced translations of foreign games, rewritten dialogue within a game, fixed bugs that were present in the original game, and updated old sports games with modern rosters. It is even possible to apply high-resolution texture pack upgrades to 3-D games, and sometimes to 2-D games, where available and feasible.
Enhanced technical features
Software that emulates a console can be improved with additional capabilities that the original system did not have. These include enhanced graphical capabilities, such as spatial anti-aliasing, upscaling of the framebuffer resolution to match high-definition and even higher display resolutions, and anisotropic filtering (texture sharpening).
Emulation software may offer improved audio capabilities (e.g. decreased latency and better audio interpolation), enhanced save states (which allow the user to save a game at any point for debugging or re-try) and decreased boot and loading times. Some emulators feature an option to "quickly" boot a game, bypassing the console manufacturer's original splash screens.
Furthermore, emulation software may offer online multiplayer functionality and the ability to speed up and slow down the emulation speed. This allows the user to fast-forward through unwanted cutscenes for example, or the ability to disable the framelimiter entirely (useful for benchmarking purposes).
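Because the entire console is modelled in software, features such as save states and fast-forward largely reduce to serializing the emulator's state and adjusting its timing loop. The sketch below illustrates both ideas with an invented, simplified machine state and a placeholder per-frame step; a real emulator must also version its snapshots and capture far more hardware state (video, audio, timers).

```cpp
#include <chrono>
#include <cstdint>
#include <fstream>
#include <string>
#include <thread>
#include <vector>

// Simplified guest state; a real emulator also snapshots video/audio/timer state.
struct MachineState {
    uint32_t pc = 0;
    std::vector<uint8_t> ram = std::vector<uint8_t>(128 * 1024);
};

// "Save state": serialize the whole machine to disk so play can resume anywhere.
void save_state(const MachineState& s, const std::string& path) {
    std::ofstream f(path, std::ios::binary);
    f.write(reinterpret_cast<const char*>(&s.pc), sizeof(s.pc));
    f.write(reinterpret_cast<const char*>(s.ram.data()), s.ram.size());
}

void load_state(MachineState& s, const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    f.read(reinterpret_cast<char*>(&s.pc), sizeof(s.pc));
    f.read(reinterpret_cast<char*>(s.ram.data()), s.ram.size());
}

// Frame limiter: sleeping between frames holds the original speed; skipping the
// sleep or scaling the target frame time gives fast-forward or slow motion.
void run_frames(MachineState& s, int frames, double speed, bool limit) {
    using clock = std::chrono::steady_clock;
    const auto frame_time = std::chrono::duration<double>((1.0 / 60.0) / speed);
    for (int i = 0; i < frames; ++i) {
        auto start = clock::now();
        s.pc += 1;                       // placeholder for emulating one frame
        if (limit) std::this_thread::sleep_until(start + frame_time);
    }
}

int main() {
    MachineState s;
    run_frames(s, 60, 1.0, true);        // one second at normal speed
    save_state(s, "slot1.sav");          // snapshot mid-game
    run_frames(s, 600, 4.0, true);       // fast-forward at 4x
    load_state(s, "slot1.sav");          // rewind to the snapshot
    return 0;
}
```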
Bypassing regional lockouts
Some consoles have a regional lockout, preventing the user from playing games from outside the designated game region. This can be a nuisance for console gamers, as some games feature seemingly inexplicable localization differences between regions, such as the differing time requirements for driving missions and license tests in Gran Turismo 4, or the PAL version of Final Fantasy X, which added more in-game skills, changes to some bosses, and additional bosses, the Dark Aeons, that were not available in the American NTSC release of the game.
Although it is usually possible to modify the consoles themselves to bypass regional lockouts, such console modifications can cause problems with screens not displaying correctly and games running too fast or too slow, because the console may not be designed to output in the correct format for the game. These problems can be overcome in emulators, which are usually designed with their own output modules and can run both NTSC and PAL games without issue.
Cheating and widescreen functionality
Many emulators, for example Snes9x, make it far easier to load console-based cheats, without requiring potentially expensive proprietary hardware devices such as GameShark and Action Replay. Freeware tools allow codes produced by such devices to be converted into a form that can be read directly by the emulator's built-in cheating system, and even allow cheats to be toggled from the menu. The debugging tools featured in many emulators also aid gamers in creating their own such cheats.
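Internally, most of these cheat systems reduce to a list of address/value pairs that the emulator writes back into emulated RAM once per frame, overriding whatever the game stored there (for example, freezing a lives counter). The sketch below shows that idea with an invented code format; real GameShark or Action Replay codes are encoded, and sometimes encrypted, differently for each console.

```cpp
#include <cstdint>
#include <vector>

// A decoded cheat: "keep this value at this guest RAM address".
// The format is invented for illustration; real cheat devices use per-console encodings.
struct Cheat {
    uint32_t address;
    uint8_t  value;
    bool     enabled;
};

// Called once per emulated frame, after the game logic has run,
// so the cheat value always "wins" over whatever the game wrote.
void apply_cheats(std::vector<uint8_t>& ram, const std::vector<Cheat>& cheats) {
    for (const Cheat& c : cheats)
        if (c.enabled && c.address < ram.size())
            ram[c.address] = c.value;
}

int main() {
    std::vector<uint8_t> ram(64 * 1024);
    std::vector<Cheat> cheats = {
        {0x07D6, 99, true},              // e.g. freeze a hypothetical "lives" counter at 99
    };
    for (int frame = 0; frame < 60; ++frame) {
        // ...emulate one frame of the game here...
        apply_cheats(ram, cheats);       // cheats can be toggled from the emulator's menu
    }
    return ram[0x07D6] == 99 ? 0 : 1;
}
```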
Similar systems can also be used to enable widescreen hacks for certain games, allowing the user to play games that were not originally intended for widescreen without aspect-ratio distortion on widescreen monitors.
See also
List of video game emulators
Notes
References
Computer and video game platform emulators |
48114377 | https://en.wikipedia.org/wiki/Universal%20Windows%20Platform | Universal Windows Platform | Universal Windows Platform (UWP) is a computing platform created by Microsoft and first introduced in Windows 10. The purpose of this platform is to help develop universal apps that run on Windows 10, Windows 10 Mobile, Windows 11, Xbox One, Xbox Series X/S and HoloLens without the need to be rewritten for each. It supports Windows app development using C++, C#, VB.NET, and XAML. The API is implemented in C++, and supported in C++, VB.NET, C#, F# and JavaScript. Designed as an extension to the Windows Runtime (WinRT) platform first introduced in Windows Server 2012 and Windows 8, UWP allows developers to create apps that will potentially run on multiple types of devices.
UWP does not target non-Microsoft systems. Microsoft's solution for other platforms is .NET MAUI (previously "Xamarin.Forms"), an open-source API created by Xamarin, a Microsoft subsidiary since 2016. Community solutions also exist for non-targeted platforms, such as the Uno Platform.
Compatibility
UWP is a part of Windows 10, Windows 10 Mobile and Windows 11. UWP apps do not run on earlier Windows versions.
Apps targeting this platform are natively developed using Visual Studio 2015, Visual Studio 2017 or Visual Studio 2019. Older Metro-style apps for Windows 8.1, Windows Phone 8.1 or for both (universal 8.1 apps) need modifications to migrate to UWP.
Some Windows platform features in later versions have been exclusive to UWP and software specifically packaged for it, and are not usable in other architectures such as the existing WinAPI, WPF, and Windows Forms. However, as of 2019, Microsoft has taken steps to increase the parity between these application platforms and make UWP features usable inside non-UWP software. Microsoft introduced XAML Islands (a method for embedding UWP controls and widgets into non-UWP software) as part of the Windows 10 May 2019 update, and stated that it would also allow UWP functions and Windows Runtime components to be invoked within non-packaged software.
API bridges
UWP Bridges translate calls in other application programming interfaces (APIs) to the UWP interface, so that applications written against those APIs can run on UWP. Two bridges were announced during the 2015 Build keynote to allow Android and iOS apps to be ported to Windows 10 Mobile. Microsoft maintains support for bridges for Windows desktop apps, progressive web apps, Microsoft Silverlight, and iOS's Cocoa Touch API.
iOS
Windows Bridge for iOS (codenamed "Islandwood") is an open-source middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10 by using Visual Studio 2015 to convert the Xcode project into a Visual Studio project. An early build of Windows Bridge for iOS was released as open-source software under the MIT License on August 6, 2015, while the Android version was in closed beta.
This "WinObjC" project is open source on GitHub. It contains code from various existing implementations of Cocoa Touch like Cocotron and GNUstep as well as Microsoft's own code that implements iOS frameworks using UWP methods. It uses a version of the LLVM clang compiler.
Android
Windows Bridge for Android (codenamed "Astoria") was a runtime environment that would allow for Android apps written in Java or C++ to run on Windows 10 Mobile and published to Microsoft Store. Kevin Gallo, technical lead of Windows Developer Platform, explained that the layer contained some limitations: Google Mobile Services and certain core APIs are not available, and apps that have "deep integration into background tasks", such as messaging software, would not run well in this environment.
In February 2016, Microsoft announced that it had ceased development on Windows Bridge for Android, citing redundancies due to iOS already being a primary platform for multi-platform development, and that Windows Bridge for iOS produced native code and did not require an OS-level emulator. Instead, Microsoft encouraged the use of C# for multi-platform app development using tools from Xamarin, which they had acquired prior to the announcement.
Deployment
UWP provides an application model based upon its CoreApplication class and the Windows Runtime (WinRT). Universal Windows apps created using UWP no longer indicate in their manifest that they were written for a specific OS; instead, they target one or more device families, such as a PC, smartphone, tablet, or Xbox One, using Universal Windows Platform Bridges. These extensions allow the app to automatically utilize the capabilities that are available to the particular device it is currently running on. A universal app may run on either a mobile phone or a tablet and provide suitable experiences on each. A universal app running on a smartphone may start behaving the way it would if it were running on a PC when the phone is connected to a desktop computer or a suitable docking station.
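For example, an app can adapt its behaviour at run time by asking which device family it is currently running on. The snippet below is a minimal C++/WinRT sketch assuming the Windows.System.Profile.AnalyticsInfo API available to UWP apps; the project scaffolding and error handling that Visual Studio normally generates are omitted.

```cpp
#include <winrt/Windows.System.Profile.h>
#include <iostream>

using namespace winrt;
using namespace winrt::Windows::System::Profile;

int main() {
    init_apartment();                                 // initialize the Windows Runtime
    // Reports e.g. "Windows.Desktop", "Windows.Xbox" or "Windows.Mobile",
    // letting one binary adapt its UI and capabilities per device family.
    hstring family = AnalyticsInfo::VersionInfo().DeviceFamily();
    std::wcout << L"Running on device family: " << family.c_str() << std::endl;
}
```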
Reception
Games developed for UWP are subject to technical restrictions, including incompatibility with multi-video-card setups, difficulties with modding, and lack of support for overlays from gameplay-oriented chat clients and for key-binding managers. UWP only supports DirectX 11.1 or later, so games built on older DirectX versions will not work. During Build 2016, Microsoft Xbox division head Phil Spencer announced that the company was attempting to address issues which would improve the viability of UWP for PC games, stating that Microsoft was "committed to ensuring we meet or exceed the performance expectations of full-screen games as well as the additional features including support for overlays, modding, and more." Support for AMD FreeSync and Nvidia G-Sync technologies, and the ability to disable V-sync, were later added to UWP.
Epic Games founder Tim Sweeney criticized UWP for being a walled garden, since by default UWP software may only be published and installed via Windows Store, requiring changes in system settings to enable the installation of external software (similarly to Android). Additionally, certain operating system features are exclusive to UWP and cannot be used in non-UWP software such as most video games. Sweeney characterized these moves as "the most aggressive move Microsoft has ever made" in attempting to transform PCs into a closed platform, and felt that these moves were meant to put third-party games storefronts such as Steam at a disadvantage as Microsoft is "curtailing users' freedom to install full-featured PC software and subverting the rights of developers and publishers to maintain a direct relationship with their customers". As such, Sweeney argued that end-users should be able to download UWP software and install it in the same manner as non-UWP software.
Windows VP Kevin Gallo addressed Sweeney's concerns, stating that "in the Windows 10 November Update, we enabled people to easily side-load apps by default, with no UX required. We want to make Windows the best development platform regardless of technologies used, and offer tools to help developers with existing code bases of HTML/JavaScript, .NET and Win32, C++ and Objective-C bring their code to Windows, and integrate UWP capabilities. With Xamarin, UWP developers can not only reach all Windows 10 devices, but they can now use a large percentage of their C# code to deliver a fully native mobile app experiences for iOS and Android."
In a live interview with Giant Bomb during its E3 2016 coverage, Spencer defended the mixed reception of its UWP-exclusive releases, stating that "they all haven't gone swimmingly. Some of them have gone well", and that "there's still definitely concern that UWP and our store are somehow linked in a way that is nefarious. It's not." He also discussed Microsoft's relationships with third-party developers and distributors such as Steam, considering the service to be "a critical part of gaming's success on Windows" and stating that Microsoft planned to continue releasing games through the platform as well as its own, but that "There's going to be areas where we cooperate and there's going to be areas where we compete. The end result is better for gamers." Spencer also stated that he was a friend of Sweeney and had been in frequent contact with him.
On May 30, 2019, Microsoft announced that it would support distribution of Win32 games on Microsoft Store; Spencer (who had since been promoted to head of all games operations at Microsoft, reporting directly to CEO Satya Nadella) explained that developers preferred the architecture, and that it "allow[s] for the customization and control [developers and players] come to expect from the open Windows gaming ecosystem." It was also announced that future Xbox Game Studios releases on Windows would be made available on third-party storefronts such as Steam, rather than be exclusive to Microsoft Store.
References
External links
Guide to Universal Windows Platform (UWP) apps
Comparison of UWP, Android, and iOS from a Programmer's point of view
.NET
Windows APIs
Windows technology
Xbox One
Microsoft application programming interfaces |
1233983 | https://en.wikipedia.org/wiki/Tobacco%20packaging%20warning%20messages | Tobacco packaging warning messages | Tobacco package warning messages are warning messages that appear on the packaging of cigarettes and other tobacco products concerning their health effects. They have been implemented in an effort to enhance the public's awareness of the harmful effects of smoking. In general, warnings used in different countries try to emphasize the same messages. Warnings for some countries are listed below. Such warnings have been required in tobacco advertising for many years, with the earliest mandatory warning labels implemented in Iceland in 1969. Implementing tobacco warning labels has been strongly opposed by the tobacco industry, most notably in Australia following the implementation of plain packaging laws.
The WHO Framework Convention on Tobacco Control, adopted in 2003, requires such package warning messages to promote awareness against smoking.
A 2009 review summarises that "There is clear evidence that tobacco package health warnings increase consumers' knowledge about the health consequences of tobacco use." The warning messages "contribute to changing consumers' attitudes towards tobacco use as well as changing consumers' behavior."
At the same time, such warning labels have been subject to criticism. Meta-analyses from 2007 indicated that communications emphasizing the severity of a threat are less effective than communications focusing on susceptibility, and that warning labels may have no effect among smokers who are not confident that they can quit, which led the authors to recommend exploring different, potentially more effective methods of behavior change. In many countries, a variety of warnings with graphic, disturbing images of tobacco-related harms are placed prominently on cigarette packages.
Albania
Text-based warnings on cigarette packets are used in Albania.
Pirja e duhanit mund të vrasë
Smoking can kill
Pirja e duhanit ju dëmton ju dhe të tjerët rreth jush
Smoking seriously harms you and others around you
Duhanpirësit vdesin më të rinj
Smokers die younger
Duhanpirja bllokon arteriet dhe shkakton infarkt të zemrës ose hemorragji cerebrale
Smoking blocks the arteries and causes heart attack or cerebral hemorrhage
Duhanpirja shkakton kancer në mushkëri
Smoking causes lung cancer
Duhanpirja gjatë shtatëzanisë dëmton fëmijën tuaj
Smoking during pregnancy harms your baby
Ruani fëmijët: mos i lini ata të thithin tymin tuaj
Protect children: do not let them breathe your smoke
Mjeku ose farmacisti juaj mund t'ju ndihmojë të lini duhanin
Your doctor or pharmacist can help you stop smoking
Duhanpirja shkakton varësi të fortë, mos e filloni atë
Smoking causes a strong addiction, do not start it
Lënia e duhanit pakëson rrezikun e sëmundjeve vdekjeprurëse të zemrës dhe mushkërive
Quitting smoking reduces the risk of deadly heart and lung diseases
Duhanpirja mund të shkaktojë një vdekje të ngadaltë dhe të dhimbshme
Smoking can cause a slow and painful death
Kërkoni ndihmë për të lënë duhanin: telefon 0 800 47 47; Sektori i abuzimit me substancat, Instituti i Shëndetit Publik: www.ishp.gov.al; konsultonhuni me mjekun/farmacistin tuaj
Ask for help to quit smoking: telephone 0800 47 47; Substance Abuse sector, the Institute of Public Health: www.ishp.gov.al; consult your doctor/pharmacist
Duhanpirja mund të ngadalësojë rrjedhjen e gjakut dhe shkakton impotencë
Smoking can slow blood flow and cause impotence
Duhanpirja shkakton plakjen e fytyrës
Smoking causes facial aging
Duhani përmban benzen, nitrosaminë, formalinë dhe cianid hidrogjeni
Tobacco contains benzene, nitrosamines, formalin and hydrogen cyanide
Pirja e duhanit shkakton sëmundje të zemrës
Smoking causes heart disease
Duhanpirja mund të dëmtojë spermën dhe ul pjellorinë
Smoking can damage sperm and decrease fertility
Duhani dëmton rëndë shëndetin
Smoking seriously harms health
Argentina
General warning:
As of 30 January 2013, all cigarette packages must include graphical warning labels that show the detrimental effects on the health of long-term smokers.
Translation of words in box:
Australia
On 1 December 2012, Australia introduced groundbreaking legislation and the world's toughest tobacco packaging warning messages to date. All marketing and brand devices were removed from the package and replaced with warnings; only the name of the product remains, in generic, standard-sized text. All tobacco products sold, offered for sale or otherwise supplied in Australia were plain packaged and labelled with new and expanded health warnings.
Azerbaijan
In Azerbaijan, cigarette packages carry a small notice: "Ministry of Health warns: Smoking is dangerous for your health". There is no mandated minimum size for the warning, and a typical warning occupies around 6% of each side of the packaging. Azerbaijani law does not require specific health warnings (detailed descriptions of the health effects of smoking), variety in the warnings used, or imagery.
Bangladesh
The government of Bangladesh is committed to tobacco control. At the concluding ceremony of the South Asian Speakers' Summit, titled 'Achieving Sustainable Development Target Level', held in Dhaka in 2016, the Prime Minister announced a goal of completely eliminating tobacco use in Bangladesh by 2040. Bangladesh was the first country to sign the FCTC (Framework Convention on Tobacco Control).
The Bangladesh government revised the Smoking and Tobacco Products (Control) Act, 2005 in 2013 and framed rules under the amended law in 2015. Under the revised law and rules, pictorial health warnings covering 50% of the packaging of all tobacco products have been required since 19 March 2016.
Bolivia
In Bolivia, a variety of warnings with graphic, disturbing images of tobacco-related harms (including laryngeal cancer and heart attack) are placed prominently on cigarette packages.
Bosnia and Herzegovina
Front of packaging (covers 30% of surface):
Pušenje je štetno za zdravlje (Smoking is harmful to health)
Pušenje ubija (Smoking kills)
Pušenje ozbiljno šteti vama i drugima oko vas (Smoking seriously harms you and others around you)
Back of packaging (covers 50% of surface):
Pušenje uzrokuje rak pluća (Smoking causes lung cancer)
Pušenje uzrokuje srčani udar (Smoking causes heart attack)
Pušenje uzrokuje moždani udar (Smoking causes stroke)
Pušenje u trudnoći šteti zdravlju Vašeg djeteta (Smoking while pregnant harms your child)
Before 2011, a small warning with the text Pušenje je štetno za zdravlje (Smoking is harmful to health) was printed on the back of cigarette packets.
Brazil
Brazil was the second country in the world and the first country in Latin America to adopt mandatory warning images on cigarette packages. Warnings and graphic images illustrating the risks of smoking have occupied 100% of the back of cigarette packs since 2001. In 2008, the government enacted a third batch of images, aimed at younger smokers.
Since 2003, the sentence
is displayed in all packs.
Brunei
In Brunei, a variety of warnings with graphic, disturbing images of tobacco-related harms (including a tracheotomy and rotting teeth) are placed prominently on cigarette packages.
Cambodia
In Cambodia, a variety of warnings with graphic, disturbing images of tobacco-related harms (including a premature birth and lung cancer) are placed prominently on cigarette packages.
Canada
Canada has had three phases of tobacco warning labels. The first set of warnings was introduced in 1989 under the Tobacco Products Control Act, and required warnings to be printed on all tobacco products sold legally in Canada. The set consisted of four messages printed in black-and-white on the front and back of the package, and was expanded in 1994 to include eight messages covering 25% of the front top of the package. In 2000, the Tobacco Products Information Regulations (TPIR) were passed under the Tobacco Act. The regulations introduced a new set of sixteen warnings. Each warning was printed on the front and back of the package, covering 50% of the surface, with a short explanation and a picture illustrating that particular warning, for example:
accompanied by a picture of a human lung detailing cancerous growths.
Additionally, on the inside of the packaging or, for some packets, on a pull-out card, "health information messages" provide answers and explanations regarding common questions and concerns about quitting smoking and smoking-related illnesses. The side of the package also featured information on toxic emissions and constituent levels.
In 2011, the TPIR were replaced for cigarettes and little cigars with the Tobacco Products Labelling Regulations (Cigarettes and Little Cigars). These regulations introduced the third and current set of 16 warnings in Canada. Currently, cigarette and little cigar packages in Canada must bear new graphic warning messages that cover 75% of the front and back of the package. The interior of each package contains 1 of 8 updated health warning messages, all including the number for a national quitline. The side of the package now bears 1 of 4 simplified toxic emission statements. These labels were fully implemented on cigarette and little cigar packages by June 2012 (though the 2000 labels still appear on other tobacco products). Canada also prohibits terms such as "light" and "mild" from appearing on tobacco packaging. The current labels were based on extensive research and a long consultation process that sought to evaluate and improve upon the warnings introduced in 2000.
In accordance with Canadian law regarding products sold legally in Canada, the warnings are provided in both English and French. Imported cigarettes to be sold in Canada which do not have the warnings are affixed with sticker versions when they are sold legally in Canada.
Health Canada also considered laws mandating plain packaging, under which legal tobacco packaging would still include warning labels, but brand names, fonts, and colours would be replaced with simple unadorned text, thereby reducing the impact of tobacco industry marketing techniques.
There have been complaints from some Canadians about the graphic nature of the labels. Plain packaging was mandated in January 2020.
Chile
Starting in November 2006, all cigarette packages sold in Chile are required to have one of two health warnings, a graphic pictorial warning or a text-only warning. These warnings are replaced with a new set of two warnings each year.
China
Under laws of the People's Republic of China, "Law on Tobacco Monopoly" (中华人民共和国烟草专卖法) Chapter 4 Article 18 and "Regulations for the Implementation of the Law on Tobacco Monopoly" (中华人民共和国烟草专卖法实施条例) Chapter 5 Article 29, cigarettes and cigars sold within the territory of China should indicate the grade of tar content and "Smoking is hazardous to your health" (吸烟有害健康) in the Chinese language on the packs and cartons.
In 2009, the warnings were changed. The warnings, which must occupy not less than 30% of the front and back of cigarette boxes, show "吸烟有害健康 尽早戒烟有益健康" ("Smoking is harmful to your health. Quitting smoking early is good for your health") on the front, and "吸烟有害健康 戒烟可减少对健康的危害" ("Smoking is harmful to your health. Quitting smoking can reduce health risks") on the back.
The warnings were revised in October 2016. The warnings must occupy at least 35% of the front and back of cigarette boxes. The following are the current warnings.
In the front:
"本公司提示吸烟有害健康请勿在禁烟场所吸烟"
("Our company notes that smoking is harmful to health. Do not smoke in non-smoking areas" in Chinese)
In the back:
"尽早戒烟有益健康戒烟可减少对健康的危害"
("Quitting smoking early is good for your health. Quitting smoking can reduce risks to health" in Chinese)
or
劝阻青少年吸烟 禁止中小学生吸烟
("Dissuade teenagers from smoking. Prohibit primary and middle school students from smoking." in Chinese)
Colombia
In Colombia, a variety of warnings with graphic, disturbing images of tobacco-related harms (including damaged arteries and bladder cancer) are placed prominently on cigarette packages.
Costa Rica
In Costa Rica, a variety of warnings with graphic, disturbing images of tobacco-related harms (including lung cancer and heart attack) are placed prominently on cigarette packages.
East Timor
Before 2018
Starting in 2018, a variety of warnings with images of tobacco-related harms, including heart attack and male impotence, are placed prominently on cigarette packages. Graphic warning messages must cover 85% of the front of cigarette packages and 100% of the back. After the introduction of graphic images on East Timorese cigarette packaging, the branding of cigarettes as "light", "mild", etc. is forbidden.
FUMA OHO ITA (Smoking kills)
FUMA KAUZA IMPOTÉNSIA (Smoking can cause impotence)
FUMA PROVOKA MOAS FUAN (Smoking can cause heart attack)
FUMA KAUZA ABORTU (Smoking can cause abortion)
FUMA PROVOKA KANKRU (Smoking can cause cancer)
FUMA KAUZA PULMAUN KRONIKU (Smoking causes chronic lung disease)
Konsulta atu para fuma: Numero telf: 113 (Quit smoking consultation: Phone number: 113)
Ecuador
In Ecuador, a variety of warnings with graphic, disturbing images of tobacco-related harms (including tongue cancer and premature birth) are placed prominently on cigarette packages.
Egypt
In Egypt, a variety of warnings with graphic, disturbing images of tobacco-related harms (including mouth cancer and gangrene) are placed prominently on cigarette packages.
European Union
Cigarette packets and other tobacco packaging must include warnings in the same size and format and using the same approved texts (in the appropriate local languages) in all member states of the European Union.
These warnings are displayed in black Helvetica bold on a white background with a thick black border. Ireland once prefaced its warnings with "Irish Government Warning", Latvia with "Veselības ministrija brīdina" (Health Ministry Warning) and Spain with "Las Autoridades Sanitarias Advierten" ("The Health Board Warns"). In member states with more than one official language, the warnings are displayed in all official languages, with the sizes adjusted accordingly (for example in Belgium the messages are written in Dutch, French and German, in Luxembourg in French and German and in Ireland, in Irish and English). All cigarette packets sold in the European Union must display the content of nicotine, tar, and carbon monoxide in the same manner on the side of the packet.
In 2003, it was reported that sales of cigarette cases had surged, attributable to the introduction of more prominent warning labels on cigarette packs by an EU directive in January 2003. Alternatively, people choose to hide the warnings using various arguably "funny" stickers, such as "You could be hit by a bus tomorrow."
The most recent EU legislation is the Tobacco Products Directive, which became applicable in EU countries in May 2016.
front, takes up 30%
reverse, takes up 40%
side, nicotine, tar, and carbon monoxide measurement
Austria and Germany
General warnings
Rauchen fügt Ihnen und den Menschen in Ihrer Umgebung erheblichen Schaden zu. – Smoking severely harms you and the people around you.
Additional warnings
Raucher sterben früher – Smokers die sooner.
Rauchen führt zur Verstopfung der Arterien und verursacht Herzinfarkte und Schlaganfälle. – Smoking leads to clogging of arteries and causes heart attacks and strokes.
Rauchen verursacht tödlichen Lungenkrebs. – Smoking causes lethal lung cancer.
Rauchen in der Schwangerschaft schadet Ihrem Kind – Smoking while pregnant harms your child.
Schützen Sie Kinder – lassen Sie sie nicht Ihren Tabakrauch einatmen! – Protect children – don't let them inhale your tobacco smoke!
Ihr Arzt oder Apotheker kann Ihnen dabei helfen, das Rauchen aufzugeben. – Your doctor or pharmacist can help you to give up smoking.
Rauchen macht sehr schnell abhängig: Fangen Sie gar nicht erst an! – Smoking makes you addicted very fast: Don't start in the first place!
Wer das Rauchen aufgibt, verringert das Risiko tödlicher Herz- und Lungenerkrankungen – Giving up smoking reduces the risk of fatal heart and lung diseases.
Rauchen kann zu einem langsamen und schmerzhaften Tod führen – Smoking can lead to a slow and painful death.
Rauchen kann zu Durchblutungsstörungen führen und verursacht Impotenz – Smoking can lead to blood circulation disorders and causes impotence.
Rauchen lässt Ihre Haut altern – Smoking ages your skin.
Rauchen kann die Spermatozoen schädigen und schränkt die Fruchtbarkeit ein – Smoking can damage the spermatozoa and decreases your fertility.
Rauch enthält Benzol, Nitrosamine, Formaldehyd und Blausäure – Smoke contains benzene, nitrosamine, formaldehyde and hydrogen cyanide.
Belgium
In Belgium, warning signs are written in all three official languages of Belgium. These three languages are Dutch, French and German.
Croatia
Front of packaging (covers 30% of surface):
or
Back of packaging (covers 40% of surface):
Pušači umiru mlađi (Smokers die younger)
Pušenje uzrokuje začepljenje arterija i uzrokuje srčani i moždani udar (Smoking clogs your arteries and causes heart attacks and strokes)
Pušenje uzrokuje smrtonosan rak pluća (Smoking causes lethal lung cancer)
Pušenje u trudnoći šteti vašem djetetu (Smoking while pregnant harms your child)
Zaštitite djecu od udisanja vašeg cigaretnog dima (Protect children from inhaling your cigarette smoke)
Vaš liječnik ili ljekarnik može vam pomoći prestati pušiti (Your doctor or pharmacist can help you stop smoking)
Pušenje stvara izrazitu ovisnost, nemojte ni počinjati (Smoking is highly addictive, don't even start)
Prestanak pušenja umanjuje rizik smrtnih srčanih ili plućnih bolesti (Quitting smoking reduces the risk of deadly heart and lung diseases)
Pušenje može izazvati polaganu i bolnu smrt (Smoking can cause a slow and painful death)
Potražite pomoć za prestanak pušenja /savjetujte se sa svojim liječnikom/ljekarnikom) (Get help to stop smoking /consult with your doctor/pharmacist)
Pušenje može usporiti krvotok i prouzročiti impotenciju (Smoking can slow down blood circulation and cause impotence)
Pušenje uzrokuje starenje kože (Smoking causes aging of the skin)
Pušenje može oštetiti spermu i smanjiti plodnost (Smoking can damage sperm and reduce fertility)
Dim sadrži benzene, nitrozamine, formaldehide i ugljikove cijanide (Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanides*)
The last warning contains a mistranslation from Directive 2001/37/EC – "hydrogen" was translated as ugljik (carbon) instead of vodik. It was nevertheless signed into law and started appearing on cigarette packages in March 2009.
2004–2009
These warnings are also simple text warnings.
Front of packaging:
Pušenje šteti Vašem zdravlju (Smoking harms your health)
Back of packaging:
Pušenje uzrokuje rak (Smoking causes cancer)
Pušenje u trudnoći šteti i razvoju djeteta (Smoking while pregnant harms the child's development)
Pušenje uzrokuje srčani i moždani udar (Smoking causes heart attacks and strokes)
Pušenje skraćuje život (Smoking shortens your life)
Side of packaging:
Zabranjuje se prodaja osobama mlađim od 18 godina (Not for sale to persons under the age of 18)
1997–2004
Between 1997 and 2004, a simple text label warning Pušenje je štetno za zdravlje (Smoking is harmful to health) was used.
Cyprus
Front side
or
Rear
(Smokers die younger.)
(Smoking clogs the arteries and causes heart attacks and strokes.)
(Smoking causes terminal lung cancer.)
(Smoking during pregnancy may harm your baby.)
(Protect children: don't make them breathe your smoke.)
(Your doctor or your pharmacist can help you stop smoking.)
(Smoking is highly addictive, don't start.)
(Stopping smoking reduces the risk of fatal heart and lung diseases.)
(Smoking can cause a slow and painful death.)
(Smoking may reduce the blood flow and cause impotence.)
(Smoking causes ageing of the skin.)
(Smoking can damage the sperm and decreases fertility.)
(Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide.)
Czech Republic
Kouření vážně škodí Vám i lidem ve Vašem okolí. (Smoking seriously harms you and others around you.)
Kuřáci umírají předčasně. (Smokers die younger.)
Kouření ucpává tepny a způsobuje infarkt a mrtvici. (Smoking clogs the blood vessels and causes heart attacks and strokes.)
Kouření způsobuje smrtelnou rakovinu plic. (Smoking causes deadly lung cancer.)
Kouření je vysoce návykové, nezačínejte s ním. (Smoking is highly addictive, don't start.)
Přestat kouřit, znamená snížit riziko vzniku smrtelných onemocnění srdce a plic. (Quitting smoking reduces the risk of deadly heart and lung diseases.)
Kouření může způsobit pomalou a bolestivou smrt. (Smoking can cause a slow and painful death.)
Kouření způsobuje stárnutí kůže. (Smoking causes ageing of the skin.)
Kouření může poškodit sperma a snižuje plodnost. (Smoking can damage sperm and reduce fertility.)
Kouření může snižovat krevní oběh a způsobuje neplodnost. (Smoking can reduce blood flow and cause impotence.)
Kouř obsahuje benzen, nitrosaminy, formaldehyd a kyanovodík. (Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanides.)
Kouření v těhotenství škodí zdraví Vašeho dítěte. (Smoking while pregnant harms your child.)
Chraňte děti: nenuťte je vdechovat Váš kouř. (Protect children: don't make them breathe your smoke.)
Váš lékař nebo lékárník Vám může pomoci přestat s kouřením. (Your doctor or pharmacist can help you stop smoking.)
Získejte pomoc při odvykání kouření. (Seek help to stop smoking.)
As of 7 December 2016, all packages must also include warning images in addition to text warnings. Cigarette manufacturers are also prohibited from displaying the content of nicotine, tar and carbon monoxide on cigarette packages, because it might mislead customers. The box previously listing the contents of the cigarette was replaced by a warning message: Tabákový kouř obsahuje přes 70 látek, které prokazatelně způsobují rakovinu. (Tobacco smoke contains over 70 substances, which provably cause cancer.)
Denmark
Warning texts on tobacco products are health warnings reproduced on the packaging of cigarettes and other tobacco products. They have been implemented in an effort to strengthen public knowledge about the dangers of smoking.
The order was introduced in Denmark on 31 December 1991 and was last revised on 2 October 2003; the revision also imposed a ban on the words "light" and "mild" on Danish cigarette packages, as in other European Union countries.
The marking shall appear on one third of the most visible part of the package.
(Smoking seriously harms you and others around you)
(Smokers die younger)
(Smoking clogs the arteries and causes heart attacks and strokes)
(Smoking causes fatal lung cancer)
(If you are pregnant, smoking damages your child's health)
(Protect children from tobacco smoke – they have the right to choose)
(Your doctor or your pharmacist can help you stop smoking)
(Smoking is highly addictive, do not start)
(Stopping smoking reduces the risk of fatal heart and lung diseases)
(Smoking can cause a slow and painful death)
(Get help to quit smoking: Telephone number 80313131)
(Smoking may reduce blood flow and causes impotence)
(Smoking causes aging of the skin)
(Smoking can damage sperm and reduce fertility)
(Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
For smokeless tobacco, the above markings do not apply; instead, the label "" (This tobacco product can damage your health and is addictive) is always used on such products.
Estonia
General warning:
or
Finland
In Finland, warning signs are written in both Finnish and Swedish languages.
Tupakointi vahingoittaa vakavasti sinua ja ympärilläsi olevia / Rökning skadar allvarligt dig själv och personer i din omgivning (Smoking severely harms you and those around you)
Tupakoivat kuolevat nuorempina / Rökare dör i förtid (Smokers die younger)
Tupakointi tukkii verisuonet sekä aiheuttaa sydänkohtauksia ja aivoveritulppia / Rökning ger förträngningar i blodkärlen och orsakar hjärtinfarkt och stroke (Smoking clogs the arteries and causes heart attacks and strokes)
Tupakointi aiheuttaa keuhkosyöpää, joka johtaa kuolemaan / Rökning orsakar dödlig lungcancer (Smoking causes lung cancer, which leads to death / Smoking causes fatal lung cancer)
Tupakointi raskauden aikana vahingoittaa lastasi / Rökning under graviditeten skadar ditt barn (Smoking during pregnancy harms your child)
Suojele lapsia ― älä pakota heitä hengittämään tupakansavua / Skydda barnen ― låt dem inte andas in din tobaksrök (Protect children ― don't force them to breathe tobacco smoke / Protect children ― don't make them breathe your smoke)
Lääkäriltä tai apteekista saat apua tupakoinnin lopettamiseen / Din läkare eller ditt apotek kan hjälpa dig att sluta röka (You receive help to stop smoking from a doctor or a pharmacy / Your doctor or your pharmacist can help you stop smoking)
Tupakointi aiheuttaa voimakasta riippuvuutta. Älä aloita / Rökning är mycket beroendeframkallande. Börja inte rök (Smoking causes powerful addiction. Don't start / Smoking is highly addictive. Don't start)
Lopettamalla tupakoinnin vähennät vaaraa sairastua kuolemaan johtaviin sydän- ja keuhkosairauksiin / Om du slutar röka löper du mindre risk att få dödliga hjärt- och lungsjukdomar (By stopping smoking you reduce the risk of fatal heart and lung diseases / Stopping smoking reduces the risk of fatal heart and lung diseases)
Tupakointi voi aiheuttaa hitaan ja tuskallisen kuoleman / Rökning kan leda till en långsam och smärtsam död (Smoking can cause a slow and painful death)
Pyydä apua tupakoinnin lopettamiseen: puh. 0800 148 484 / Sök hjälp för att sluta röka: tfn 0800 148 484 (Request help to stop smoking: phone 0800 148 484 / Get help to stop smoking: phone 0800 148 484)
Tupakointi aiheuttaa impotenssia ja voi heikentää verenkiertoa / Rökning kan försämra blodflödet och orsakar impotens (Smoking causes impotence and may reduce the blood flow / Smoking may reduce the blood flow and cause impotence)
Tupakointi vanhentaa ihoa / Rökning får din hy att åldras (Smoking ages the skin / Smoking causes ageing of the skin)
Tupakointi voi vahingoittaa siittiöitä ja vähentää hedelmällisyyttä / Rökning kan skada sperman och minskar fruktsamheten (Smoking can damage the sperm and decrease fertility / Smoking can damage the sperm and decreases fertility)
Savu sisältää bentseeniä, nitrosamiineja, formaldehydiä ja vetysyanidia / Rök innehåller bensen, nitrosaminer, formaldehyd och cyanväte (Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
France
Before January 2017, France used regular EU warnings for tobacco products.
Front of packaging (covers 30% of surface)
or
Rear (covers 40% of surface, similar design)
(Smokers die prematurely.)
(Smoking clogs arteries and causes heart attacks and strokes.)
(Smoking causes fatal lung cancer.)
(Smoking during pregnancy harms your child's health.)
(Protect your children: don't make them breathe your smoke.)
(Your doctor or pharmacist can help you quit smoking.)
(Smoking is highly addictive, don't start.)
(Quitting smoking reduces the risk of fatal heart and lung diseases.)
(Smoking can result in a slow and painful death.)
(Help yourself quit smoking: call 0 825 309 310)
(Smoking can reduce the blood flow and causes impotence.)
(Smoking damages sperm and reduces fertility.)
)
Left or right side of packaging
Components percentages (15% of surface, small prints): Tobacco – Cigarette paper – Flavor and texture agents
ISO yields of toxins in mg/cigarette (in a prominent black-on-white square, in bold letters): Tar – Nicotine – Carbon monoxide
Other side of packaging
Country of manufacturing, name of manufacturer, quantity
Product identifier (EAN-7 bar code)
Other characteristics
Small print: "" (sales in France)
Recyclable logo (for the packaging)
Words forbidden in the displayed product name: light, ultra-light, légère, or any other indication suggesting that the product is a minor drug with low impact... (These branding words have been replaced by various colour names)
Plain packaging has been regulated since January 2017.
Greece
(Smoking can kill)
(Smoking seriously harms the smoker and the ones around him.)
(Smoking seriously harms you and the ones around you.)
(Smokers die prematurely.)
(Smoking clogs the arteries and causes heart attacks and strokes.)
(Smoking causes fatal lung cancers.)
(Smoking during pregnancy can harm the health of your baby.)
(Protect children: do not force them to breathe in your smoke.)
(Your physician or pharmacist can assist you in quitting smoking.)
(Smoking is very addictive; don't start with it)
(Stopping smoking reduces the risk of fatal heart and lung diseases)
(Smoking can cause a slow and painful death.)
(Seek help to quit smoking: consult your doctor.)
(Smoking may reduce blood circulation and cause impotence)
(Smoking causes premature skin aging.)
(Smoking damages sperm and reduces fertility.)
(Cigarette smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide.)
Hungary
(Smokers die earlier.)
(Smoking clogs arteries and causes heart diseases and strokes.)
(Smoking causes fatal lung cancer.)
(Smoking during pregnancy harms the baby.)
(Protect the kids, don't smoke in their presence.)
(Your doctor or your pharmacist may help you quit smoking.)
(Smoking is highly addictive, don't start.)
(Quitting smoking reduces the risk of deadly cardiovascular and lung diseases.)
(Smoking can cause a long and painful death.)
(Specialists in the medical profession may help you quit smoking.)
(Smoking may reduce blood circulation and cause impotency.)
(Smoking ages the skin.)
(Smoking may damage sperm and diminish fertility.)
(Cigarettes contain benzene, nistrosamine, formaldehyde and hydrocyanic acid.)
Ireland
Ireland currently follows EU standards (see above), but previously ran its own scheme, where one of 8 messages was placed on the pack, as defined in SI 326/1991.
After a High Court settlement in January 2008, it was accepted that the warnings on tobacco products must appear in all official languages of the state. As a result, the European Communities (Manufacture, Presentation and Sale of Tobacco Products) (Amendment) Regulations 2008 were enacted. These regulations state that tobacco products going to market after 30 September 2008 must carry warnings in Irish and English. A year-long transition period applied to products which were on the market prior to 1 October 2008, which may have been sold until 1 October 2009.
Each packet of tobacco products must carry:
One of the following general warnings (in both the Irish and English languages) which must cover at least 32% of the external side.
Toradh caithimh tobac – bás – Smoking kills
Déanann caitheamh tobac díobháil thromchúiseach duit agus do na daoine mórthimpeall ort – Smoking seriously harms you and others around you
AND One of the following additional warnings (in both the Irish and English languages) which must cover at least 45% of the external side.
Giorrú saoil tobac a chaitheamh – Smokers die younger
Nuair a chaitear tobac, tachtar na hartairí agus is é cúis le taomanna croí agus strócanna – Smoking clogs the arteries and causes heart attacks and strokes
Caitheamh tobac is cúis le hailse scamhóg mharfach – Smoking causes fatal lung cancer
Má chaitheann tú tobac le linn toirchis, déantar díobháil don leanbán – Smoking when pregnant harms your baby
Cosain leanaí: ná cuir iallach orthu do chuid deataigh an análú – Protect children: don't make them breathe your smoke
Féadann do dhochtúir nó do chógaiseoir cabhrú leat éirí as caitheamh tobac – Your doctor or your pharmacist can help you stop smoking
Is éasca a bheith tugtha do chaitheamh tobac, ná tosaigh leis – Smoking is highly addictive, don't start
Má éiríonn tú as tobac a chaitheamh laghdaítear an riosca de ghalair mharfacha chroí agus scamhóg – Stopping smoking reduces the risk of fatal heart and lung diseases
Féadann caitheamh tobac bheith ina chúis le bás mall pianmhar – Smoking can cause a slow and painful death
Faigh cúnamh chun éirí as caitheamh tobac: Íosghlao Stoplíne 1850 201 203 – Get help to stop smoking: Callsave Quitline 1850 201 203
Féadfaidh caitheamh tobac imshruthúfola a laghdú agus bheith ina chúis le héagumas – Smoking may reduce the blood flow and cause impotence
Caitheamh tobac is cúis le críonadh craicinn – Smoking causes ageing of the skin
Féadann caitheamh tobac dochar a dhéanamh don speirm agus laghdaíonn sé torthúlacht – Smoking can damage the sperm and decreases fertility
Cuimsíonn deatach beinséin, nítreasaimíní, formaildéad agus ciainíd hidrigine – Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide
In the case of Smokeless Tobacco Products, only the following warning must be displayed:
Féadann an táirge tobac seo dochar a dhéanamh do shláinte agus is táirge andúile é – This tobacco product can damage your health and is addictive
Italy
/ (Smoking kills / Smoking may kill)
(Smoking heavily damages you and whoever is near you)
(Smokers die early.)
(Smoking clogs arteries and causes heart diseases and strokes)
(Smoking causes fatal lung cancer.)
(Smoking during pregnancy harms the baby.)
(Protect the kids, don't smoke in their presence.)
(Your doctor or your pharmacist may help you quit smoking.)
(Specialists in the medical profession may help you quit smoking.)
(Smoking is highly addictive, don't start.)
(Quitting smoking reduces the risk of deadly cardiovascular and lung diseases.)
(Smoking can cause a long and painful death.)
(Smoking causes oral cancer.)
(Let [someone] help you to quit smoking.)
(Smoking may reduce blood circulation and cause impotency.)
(Smoking ages the skin.)
(Smoking may damage sperm and diminish fertility.)
(Cigarettes contain benzene, nitrosamines, formaldehyde and hydrocyanic acid)
Il fumo può portare alla cecità. (Smoking can lead to blindness)
Other text is sometimes placed in the packets, for example some packets contain leaflets which have all the above warnings written on them, with more detailed explanations and reasons to give up, and advice from Philip Morris.
Latvia
(Smoking causes 9 out of 10 lung cancers)
(Smoking causes mouth and throat cancer)
(Smoking damages your lungs)
(Smoking causes heart attacks)
(Smoking causes strokes and disability)
(Smoking clogs your arteries)
(Smoking increases the risk of blindness)
(Smoking can cause damage to teeth and gum disease)
(Smoking can kill your unborn child)
(Your smoke harms your children, family and friends)
(Smokers' children are more likely to start smoking)
(Quit smoking now – stay alive for those close to you)
(Smoking reduces fertility)
(Smoking increases the risk of impotence)
Lithuania
General warning:
or
Malta
– (DANGER – Health Department – Warning)
– (Smoking kills)
– (Smokers die younger)
– (Smoking clogs the arteries and causes heart attacks and strokes)
– (Smoking causes fatal lung cancers)
– (Smoking when pregnant harms your baby)
– (Protect children: don't make them breathe your smoke)
– (Your doctor and your pharmacist can help you stop smoking)
– (Smoking is highly addictive, don't start)
– (Stopping smoking reduces the risk of fatal heart and lung diseases)
– (Smoking can cause a slow and painful death)
– 21231247 – (Get help to stop smoking – 21231247)
– (Smoking may reduce the blood flow and cause impotence)
– (Smoking causes aging of the skin)
– (Smoking can damage the sperm and decrease fertility)
– (Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
The Netherlands
(Quit smoking, stay alive for your family and friends)
(Smoking is lethal)
(Smoking causes impotence)
(Smoking causes deadly lung cancer)
(Tobacco smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
(Your physician or pharmacist can help you to quit smoking)
(Smokers die younger)
(Smoking causes clogging of the blood vessels, heart attacks and strokes)
(Smoking during pregnancy harms your baby)
(Smoking can kill your unborn child)
(Protect children: don't let them inhale your smoke)
(Smoking is very addictive; prevent yourself from starting)
(Quitting smoking reduces the risk of fatal heart and lung diseases)
(Smoking may result in a slow, painful death)
(Smoking may reduce the blood flow and cause impotence)
(Smoking ages your skin)
(Smoking can damage the sperm and reduces fertility)
(Smoking causes serious damage to you and those around you)
(Your physician or pharmacist can help you to quit smoking, call 0800-555532340 now.)
Poland
Front of packaging (covers 30% of surface):
or
There are also warnings on the back of every packet:
Palacze tytoniu umierają młodziej (Smokers die younger)
Palenie tytoniu zamyka naczynia krwionośne i jest przyczyną zawałów serca i udarów mózgu (Smoking causes clogging of the blood vessels, heart attacks and strokes)
Palenie tytoniu powoduje śmiertelnego raka płuc (Smoking causes fatal lung cancer)
Palenie tytoniu w czasie ciąży szkodzi Twojemu dziecku (Smoking while pregnant harms your baby)
Chrońcie dzieci – nie zmuszajcie ich do wdychania dymu tytoniowego (Protect children: don't let them breathe your smoke)
Twój lekarz lub farmaceuta pomoże Ci rzucić palenie (Your doctor or your pharmacist can help you quit smoking)
Palenie tytoniu silnie uzależnia – nie zaczynaj palić (Smoking is highly addictive; do not start smoking)
Zaprzestanie palenia zmniejsza ryzyko groźnych chorób serca i płuc (Stopping smoking reduces the risk of serious heart and lung disease)
Palenie tytoniu może spowodować powolną i bolesną śmierć (Smoking can cause a slow and painful death)
Dzwoniąc pod nr telefonu 0801108108, uzyskasz pomoc w rzuceniu palenia (Get help to quit smoking by calling 0801108108)
Palenie tytoniu może zmniejszyć przepływ krwi i powodować impotencję (Smoking can reduce blood flow and cause impotence)
Palenie tytoniu przyśpiesza starzenie się skóry (Smoking accelerates skin aging)
Palenie tytoniu może uszkodzić nasienie i zmniejszać płodność (Smoking can damage sperm and reduce fertility)
Dym tytoniowy zawiera benzen, nitrozoaminy, formaldehyd i cyjanowodór (Tobacco smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
Portugal
(Smoking kills)
(Smoking causes deadly lung cancer)
(Smoking causes serious addiction. Don't start smoking.)
(Smoking causes skin aging)
(Smoking seriously harms your health and the health of those around you)
(If you're pregnant: Smoking harms your child's health)
(Smokers die prematurely)
(Smoking clogs your arteries and causes heart attacks and strokes)
(Smoking might reduce blood flow and cause impotence)
(Quitting reduces the risks of deadly cardiovascular and pulmonary diseases)
Romania
General warning (on the front of cigarette packages, covering at least 40% of the area):
(Smoking kills)
(Smoking can kill)
(Smoking seriously harms you and those around you);
Additional warnings (on the back of cigarette packages, covering at least 50% of the area):
(Smokers die younger.)
(Smoking clogs the arteries and causes heart attacks and strokes)
(Smoking causes lung cancer, which is lethal.)
(Smoking when pregnant harms your baby.)
(Protect children: don't let them breathe your smoke!)
(Your doctor or pharmacist can help you quit smoking)
(Smoking is addictive, don't start smoking!)
(Stopping smoking reduces the risk of fatal heart and lung diseases)
(Smoking can cause a slow and painful death.)
(Get help to stop smoking: telephone/postal address/Internet address/consult your doctor/pharmacist...)
(Smoking reduces blood circulation and causes impotence)
(Smoking causes skin aging.)
(Smoking can affect sperm quality and decreases fertility.)
(Cigarette smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide.)
Slovenia
Front of packaging (covers 30% of surface)
or
Rear of packaging (covers 40% of surface)
(Smokers die younger.)
(Smoking clogs arteries and causes heart attacks and strokes.)
(Smoking causes fatal lung cancer.)
(Smoking during pregnancy harms your child.)
(Protect your children from breathing your cigarette smoke.)
(Your doctor or pharmacist can help you quit smoking.)
(Smoking is highly addictive, don't start.)
(Quitting smoking reduces the risk of fatal heart and lung diseases.)
(Smoking can result in a slow and painful death.)
(Seek help to quit smoking: consult your doctor.)
(Smoking can reduce blood circulation and cause impotence.)
(Smoking causes ageing of the skin.)
(Smoking damages sperm and reduces fertility.)
Spain
In Spain, cigarette packages carry warnings on both sides of the package, headed "Las Autoridades Sanitarias advierten" (Health authorities warn), written in black and white above the black part of the standard warning.
or
Front of cigarette packages
(Smoking can kill) (changed in 2010 to "Fumar mata" – "Smoking kills")
(Smoking seriously harms your health and that of others)
Back of cigarette packages
(Smoking shortens life)
(Smoking causes fatal lung cancer)
(Tobacco is very addictive, do not start)
(Smoking clogs the arteries and causes heart disease and stroke)
(Smoking causes skin aging)
(Help to stop smoking: consult your doctor or pharmacist)
(Smoking may reduce blood flow and cause impotence)
Sweden
General warnings on all Swedish cigarette packaging have been in force since 1977.
Front of cigarette packages
(Smoking kills.)
(Tobacco causes serious harm to your health.)
(Smoking while pregnant may harm your fetus.)
(Smoking can lead to a slow and painful death.)
(Smoking can impair the blood flow and cause impotence.)
(Smoking seriously hurts you and people around you.)
(Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide.)
(Protect the children. Don't let them inhale your tobacco smoke.)
Back of cigarette packages
(Smoking may damage the sperm and reduce fertility.)
(Smoking is very addictive. Do not start smoking.)
Rear side of snus packaging
(This tobacco product might cause harm to your health and is addictive.)
Georgia
General warning:
Ghana
Ghanaian warnings closely follow EU legislation, as follows:
Packaging 1 (same as in the newer UK packaging):
Smoking seriously harms you and others around you
Stopping smoking reduces the risk of fatal heart and lung diseases
Packaging 2 (same as in the older UK packaging):
Smoking causes cancer
Smoking damages the health of those around you
Packaging 3 (same as in the older UK packaging):
Smoking causes fatal diseases
Smokers die younger
Honduras
In Honduras, a variety of warnings with graphic, disturbing images of tobacco-related harms (including lung cancer and throat cancer) are placed prominently on cigarette packages.
Hong Kong
Under Hong Kong Law, Chap 371B Smoking (Public Health) (Notices) Order, packaging must indicate the amount of nicotine and tar that is present in cigarette boxes in addition to graphics depicting different health problems caused by smoking in the size and ratio as prescribed by law. The warnings are to be published in both official languages, Traditional Chinese and English.
The warning begins with the phrase 'HONG KONG SAR GOVERNMENT WARNING', followed by one of the following in all caps.
Smoking Causes Lung Cancer
Smoking Kills
Smoking Harms Your Family
Smoking Causes Peripheral Vascular Diseases
Smoking May Cause Impotence
Smoking Can Accelerate Aging of Skin
In addition, any print advertisement must give minimum 85% coverage of the following warnings:
HKSAR GOVERNMENT HEALTH WARNING
January–February: SMOKING KILLS
March–April: SMOKING CAUSES CANCER
May–June: SMOKING CAUSES HEART DISEASE
July–August: SMOKING CAUSES LUNG CANCER
September–October: SMOKING CAUSES RESPIRATORY DISEASES
November–December: SMOKING HARMS YOUR CHILDREN
Iceland
All cigarette packets and other tobacco packaging in Iceland must include warnings in the same size and format as in the European Union and using the same approved texts in Icelandic.
(Your doctor or pharmacist can help you quit smoking)
(Smoking causes cancer)
(Smoking is very harmful to you and those close to you)
(Smoking blocks your arteries and causes coronary artery disease and stroke.)
(Smoking causes fatal lung cancer)
(Protect the children – Don't let them inhale tobacco smoke.)
India
Cigarette packets sold in India are required to carry graphical and textual health warnings. The warning must cover at least 85% of the surface of the pack, of which 60% must be pictorial and the remaining 25% must contain textual warnings in English, Hindi, or any other Indian language.
In 2003, India ratified the World Health Organisation's Framework Convention on Tobacco Control, which includes a recommendation for large, clear health warnings on tobacco packages. However, there was a delay in implementing graphic warning labels.
Until 2008, cigarette packets sold in India were required to carry a written warning on the front of the packet with the text CIGARETTE SMOKING IS INJURIOUS TO HEALTH in English. Paan, gutkha and tobacco packets carried the warning TOBACCO IS INJURIOUS TO HEALTH in Hindi and English. The law later changed. According to the new law, cigarette packets were required to carry pictorial warnings of a skull or scorpion along with the text SMOKING KILLS and TOBACCO CAUSES MOUTH CANCER in both Hindi and English.
The Cigarette and Other Tobacco Products (Packaging and Labelling) Rules 2008 requiring graphic health warnings came into force on 31 May 2008. Under the law, all tobacco products were required to display graphic pictures, such as pictures of diseased lungs, and the text SMOKING KILLS or TOBACCO KILLS in English, covering at least 40% of the front of the pack, and retailers must display the cigarette packs in such a way that the pictures on pack are clearly visible. In January 2012 controversy arose when it was discovered an image of English footballer John Terry was used on a warning label.
On 15 October 2014, Union Health Minister Harsh Vardhan announced that only 15% of the surface of a pack of cigarettes could contain branding, and that the rest must be used for graphic and text health warnings. The Union Ministry of Health amended the Cigarettes and Other Tobacco Products (Packaging and Labelling) Rules, 2008 to enforce the changes effective from 1 April 2015.
However, the government decision to increase pictorial warnings on tobacco packets from 1 April was put on hold indefinitely, following the recommendations of a parliamentary committee which reportedly did not speak to health experts but only to tobacco lobby representatives. On 5 April 2016, the health ministry ordered government agencies to enforce the new rule.
Various warnings on cigarette packets:
धुम्रपान से गले का कैंसर होता है (Hindi)
Smoking causes throat cancer
TOBACCO CAUSES PAINFUL DEATH
SMOKING KILLS: Tobacco causes cancer
SMOKING KILLS: Tobacco causes slow and painful death
Following the intervention by the parliamentary committee, the NGO Health of Millions, represented by Prashant Bhushan, filed a petition in the Supreme Court of India asking the government to stop the sale of loose cigarettes and to publish bigger health warnings on tobacco packs.
Indonesia
Tobacco warnings are placed not only on packaging but also in cigarette advertisements; such advertisements are banned in most countries.
Until December 1999
1999–2001
Other versions
PERINGATAN PEMERINTAH: MEROKOK DAPAT MENYEBABKAN KANKER DAN IMPOTENSI. (Government warning: smoking can cause cancer and impotence)
2002–end of 2013
The last recorded use of this warning in TV advertisements was in late February 2014 (an Esse Mild advertisement).
2014–2018
With the enforcement of Indonesian Government Regulation No. 109 of 2012, all tobacco product and cigarette packaging and advertisements must include warning images and an age restriction (18+). Graphic warning messages must cover 40% of cigarette packages. After the introduction of graphic images on Indonesian cigarette packaging, the branding of cigarettes as "light", "mild", "filter", etc. is forbidden, except for brands that already used such words, such as L.A. Lights, A Mild, or Dunhill Filter.
The last advertisement to use this warning, however, was a 2021 Djarum Super advertisement, which subsequently changed to the 2018 warning.
Other alternatives:
PERINGATAN: MEROKOK SEBABKAN KANKER MULUT. (Warning: smoking causes mouth cancer)
PERINGATAN: MEROKOK SEBABKAN KANKER TENGGOROKAN. (Warning: smoking causes throat cancer)
PERINGATAN: MEROKOK DEKAT ANAK BERBAHAYA BAGI MEREKA. (Warning: smoking endangers kids near you)
PERINGATAN: MEROKOK SEBABKAN KANKER PARU-PARU DAN BRONKITIS KRONIS. (Warning: smoking causes lung cancer and chronic bronchitis)
The warning below appears on the side of the cigarette packaging:
DILARANG MENJUAL/MEMBERI PADA ANAK USIA DI BAWAH 18 TAHUN DAN PEREMPUAN HAMIL. (Do not sell or give [this product] to children under 18 years old and pregnant mothers)
2018–
Because all pictorial health warnings (PHWs) used in Indonesia originally came from the 2005 version of PHWs in Thailand, on 31 May 2018 the Ministry of Health launched new PHWs, two of which depict Indonesian smokers and one a Venezuelan smoker.
Other alternatives:
PERINGATAN: MEROKOK SEBABKAN KANKER MULUT. (Warning: smoking causes mouth cancer)
PERINGATAN: MEROKOK SEBABKAN KANKER PARU. (Warning: smoking causes lung cancer)
PERINGATAN: ROKOK MERENGGUT KEBAHAGIAAN SAYA SATU PERSATU. (Warning: cigarettes took away my happiness one by one)
PERINGATAN: MEROKOK SEBABKAN KANKER TENGGOROKAN. (Warning: smoking causes throat cancer)
LAYANAN BERHENTI MEROKOK: 0800-177-6565 (Smoking quitline: 0800-177-6565)
Iran
In Iran, a variety of warnings with graphic, disturbing images of tobacco-related harms (including lung cancer and mouth cancer) are placed prominently on cigarette packages.
Japan
Japan became the first country in Asia to display a general warning on cigarette packaging in 1972.
Prior to 2005, there was only one warning on all Japanese cigarette packages.
(For the good of your health, be careful not to smoke too much) (1972–1989)
(Be careful not to smoke too much, as there is a risk of damaging your health) (1990–2005)
Since 2005, more than one general warning is printed on cigarette packaging.
On the front of cigarette packages:
(Smoking is a cause of lung cancer. According to epidemiological estimates, smokers are about two to four times more likely than non-smokers to die of lung cancer.)
(Smoking increases risk of myocardial infarction. According to epidemiological estimates, smokers are about 1.7 times more likely than non-smokers to die of a heart attack.)
(Smoking increases risk of stroke. According to epidemiological estimates, smokers are about 1.7 times more likely than non-smokers to die of a stroke.)
(Smoking can aggravate the symptoms of emphysema.)
On the back of cigarette packages:
(Smoking during pregnancy is a cause of preterm delivery and impaired fetal growth. According to epidemiological estimates, pregnant women who smoke have almost double the risk of low birth weight and three times the risk of premature birth than pregnant women who do not smoke. (For more information, please visit the Ministry of Health home page at www.mhlw.go.jp/topics/tobacco/main.html.))
(Tobacco smoke adversely affects the health of people around you, especially infants, children and the elderly. When smoking, be careful not to inconvenience others.)
(The degree may differ from person to person, but nicotine [in cigarettes] causes addiction to smoking.)
(Smoking while underage heightens the addiction and damage to health caused by cigarettes. Never smoke, even if encouraged to by those around you.)
Laos
In Laos, a variety of warnings with graphic, disturbing images of tobacco-related harms (including mouth cancer and rotting teeth) are placed prominently on cigarette packages.
Malaysia
In Malaysia, a general warning has been mandatory on all cigarette packaging since June 1976.
(Warning by the government of Malaysia, Smoking endangers health)
Starting 1 June 2009, the Malaysian government decided to place graphic images on cigarette packs to show the adverse long-term effects of excessive smoking, replacing the general warning with text describing the graphic images, printed in Malay (front) and English (back):
"Rokok penyebab ..."
"Cigarette causes ..."
Graphic warning messages must cover 40% of the front of cigarette packages and 60% of the back. After the introduction of graphic images on Malaysian cigarette packaging, the branding of cigarettes as "light", "mild", etc. is forbidden.
Mexico
In Mexico, cigarette packs have carried health warnings and graphic images since 2010. By law, 30% of the pack's front, 100% of the pack's rear, and 100% of one side must consist of images and warnings. The Secretariat of Health issues new warnings and images every six months. Images have included a dead rat, a partial mastectomy, a laryngectomy, a dead human fetus surrounded by cigarette butts, a woman being fed after suffering a stroke, and damaged lungs, among others.
Warnings include smoking-related diseases and statistics, toxins found in cigarettes and others such as:
Smoking kills
Your baby can die (appealing to pregnant women)
You will have a slow and painful death
By smoking you are hurting your family
Go ahead, shorten your life
Smoking damages your arteries
Mexico became the first country to put a warning on cigarette cartons stating that tobacco use could increase the risk of COVID-19 infection.
Moldova
General warning (on the front of cigarette packages, covering at least 30% of the area, Helvetica font):
"Fumatul ucide" (smoking kills) or
"Fumatul dăunează grav sănătăţii dumneavoastră şi a celor din jur" (smoking seriously harms you and those around);
Additional warnings (on the back of cigarette packages, covering at least 40% of the area, Helvetica font):
"Fumătorii mor mai tineri" (smokers die younger);
"Fumatul blochează arterele şi provoacă infarct miocardic şi accident vascular cerebral" (Smoking clogs the arteries and causes heart attacks and strokes);
"Fumatul conduce la moarte de cancer pulmonar" (smoking causes death from lung cancer);
"Fumatul în timpul sarcinii dăunează copilului dumneavoastră" (Smoking while pregnant harm your baby);
"Protejaţi copiii dumneavoastră de inspirarea fumului de ţigaretă" (Protect your children from breathing in the cigarette smoke);
"Psihologul, profesorul sau medicul vă poate ajuta să renunţaţi la fumat" (Psychologists, Teachers, and Doctors can help you quit smoking);
"Fumatul creează dependenţă rapidă, nu încercaţi să fumaţi" (Smoking becomes addictive fast, try not to smoke);
"Abandonarea fumatului reduce riscul de îmbolnăviri cardiace sau pulmonare fatale";
"Fumatul poate provoca o moarte lentă şi dureroasă" (Smoking can cause a slow and painful death);
"Fumatul reduce circulaţia sîngelui şi provoacă impotenţă" (Smoking reduces blood circulation and increases impotency);
"Fumatul provoacă îmbolnăvirea tenului (pielei)" (Smoking causes skin diseases);
"Fumatul creează grave disfuncţii sexuale" (Smoking causes serious sexual dysfunction);
Regulated by "LEGE cu privire la tutun şi la articolele din tutun" (Law on tobacco and tobacco articles) nr. 278-XVI from 14.12.2007 enabled at 07.03.2008
Cigarette packets in Transnistria have variable warning labels, depending from where they come from (English, Russian, etc.)
Mongolia
In Mongolia, a variety of warnings with graphic, disturbing images of tobacco-related harms (including heart disease and lung cancer) are placed prominently on cigarette packages.
Montenegro
In Montenegro, a variety of warnings with graphic, disturbing images of tobacco-related harms (including mouth cancer and lung cancer) are placed prominently on cigarette packages.
Myanmar
In Myanmar, a variety of warnings with graphic, disturbing images of tobacco-related harms (including heart attack and mouth cancer) are placed prominently on cigarette packages.
Nepal
In Nepal, a variety of warnings with graphic, disturbing images of tobacco-related harms (including lung cancer and mouth cancer) are placed prominently on cigarette packages.
New Zealand
The first health warnings appeared on cigarette packets in New Zealand in 1974. Warning images accompanying the text have been required on each packet since 28 February 2008. New regulations made on 14 March 2018 provided for larger warnings and a new schedule of images and messages.
By law, 75% of the cigarette pack's front and 100% of the cigarette pack's rear must consist of warning messages. Images include gangrenous toes, rotting teeth and gums, diseased lungs and smoking-damaged hearts. Cigarette packets also carry the Quitline logo and phone number and other information about quitting smoking.
In total, there are 15 different warnings. A full list with pictures is available at the New Zealand Ministry of Health's website. Warning messages are rotated annually. Following is a list of the warnings in English and Māori.
Smoking causes heart attacks, KA PĀ MAI NGĀ MANAWA-HĒ I TE KAI PAIPA
Smoking causes over 80% of lung cancers, NEKE ATU I TE 80% O NGĀ MATE PUKUPUKU KI NGĀ PŪKAHUKAHU I AHU MAI I TE KAI PAIPA
Smoking harms your baby before it is born, KA TŪKINOHIA TŌ PĒPI I TO KŌPŪ I TE KAI PAIPA
Your smoking harms others, KA TŪKINOHIA ĒTAHI ATU I Ō MAHI KAI PAIPA
Smoking is a major cause of stroke, KA PIKI AKE I TE KAI PAIPA TŌ TŪPONO KI TE IKURA RORO
Smoking damages your blood vessels, KA TŪKINOHIA Ō IA TOTO I TE KAI PAIPA
Smoking is not attractive, KA ANUANU KOE I TE KAI PAIPA
Smoking causes heart attacks, KA PĀ MAI NGĀ MANAWA-HĒ I TE KAI PAIPA
Smoking causes lung cancer, KA PĀ MAI TE MATE PUKUPUKU KI NGĀ PŪKAHUKAHU I TE KAI PAIPA
Smoking when pregnant harms your baby, KA TŪKINOHIA TŌ PĒPI I TE KAI PAIPA I A KOE E HAPŪ ANA
Your smoking harms children, KA TŪKINOHIA NGĀ TAMARIKI I Ō MAHI KAI PAIPA
Smoking is a major cause of stroke, KA PIKI AKE I TE KAI PAIPA TŌ TŪPONO KI TE IKURA RORO
Quit before it is too late, ME WHAKAMUTU KEI RIRO KOE
Smoking causes gum disease and stinking breath, KA PĀ TE MATE PŪNIHO, KA HAUNGA TŌ HĀ I TE KAI PAIPA
Nigeria
There are two versions of general warnings, as follows:
Smoking is addictive
Smoking damages lungs
Smoking can kill
Smoking can cause cancer
Smoking can damage the fetus.
From 2013 onward, there is a warning:
North Korea
North Korea signed the WHO Framework Convention on Tobacco Control on 17 June 2003 and ratified it on 27 April 2005. Tobacco packaging warning messages are required on all types of packaging, but their appearance is not regulated in any way. They are usually printed in small print on the side of the package and only state that smoking is harmful to health. However, the descriptions must state the nicotine and tar content, must not be misleading and do need to be approved by local authorities. Graphic warning images that are now common worldwide have never appeared on packaging in North Korea.
Norway
Norway has had general warnings on cigarette packets since 1975. Today's warnings were introduced in 2003 and are in line with the EU's legislation, as Norway is an EEA member:
On the front of cigarette and cigar packages, covering about 30% of the area:
(Smoking kills)
(Smoking is very harmful to you and your surroundings)
On the back of cigarette and cigar packages, covering about 45% of the area:
(Smoking causes fatal lung cancer)
(If you stop smoking, you will reduce the risk of fatal heart and lung diseases)
(Smoking causes early ageing of the skin)
(Smoking may reduce the blood flow and cause impotence)
(Smoking can cause a slow and painful death)
(Smoking can reduce the sperm quality and decrease fertility)
(Protect children against tobacco smoke, don't let them inhale your smoke)
(Smoking is highly addictive, don't start smoking)
(Smoking clogs the arteries and causes heart attacks and strokes)
(Your doctor or your pharmacy can help you stop smoking)
(Get help to stop smoking – call the Quitline: 800 400 85)
(Smoking lowers life expectancy)
(Smoking during pregnancy harms the child)
(Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide)
Tobacco products like snus and chewing tobacco have the following warning printed on them:
(This tobacco product may be a health hazard, and is addictive)
Pakistan
All cigarettes are required by a Statutory Order 1219(I)/2008 dated 25 September 2008, published in the Gazette of Pakistan dated 24 November 2008, to carry rotational health warnings from 1 July 2009. Under the previous law, health warnings were not required to be rotated.
Each health warning will be printed for a period of 6 months. The health warnings are to be in Urdu and in English. Here are the English versions:
1. WARNING: Protect children. Do not let them breathe your smoke. Ministry of Health.
2. WARNING: Smoking causes mouth and throat cancer. Ministry of Health.
3. WARNING: Quit smoking; live longer life. Ministry of Health.
4. WARNING: Smoking severely harms you and the people around you. Ministry of Health.
The warnings shall cover at least 30% of both sides of the packet and be located at the top portion of the face (in Urdu) and back (in English) of the packet.
Panama
In Panama, a variety of warnings with graphic, disturbing images of tobacco-related harms (including throat cancer and lung cancer) are placed prominently on cigarette packages.
Paraguay
In Paraguay, a variety of warnings with graphic, disturbing images of tobacco-related harms (including impotence and heart attack) are placed prominently on cigarette packages.
Peru
In Peru, a variety of warnings with graphic, disturbing images of tobacco-related harms (including abortions and asthma) are placed prominently on cigarette packages.
Philippines
All cigarette packaging sold in the Philippines is required to display a government warning label. The warnings include:
Government Warning: Cigarette smoking is dangerous to your health.
Government Warning: Cigarettes are addictive.
Government Warning: Tobacco smoke can harm your children.
Government Warning: Smoking kills.
In July 2014, Philippine President Benigno Aquino III signed Republic Act 10643, or "An Act to Effectively Instill Health Consciousness through Graphic Health Warnings on Tobacco Products", better known as the "Graphic Health Warning Act." This law requires tobacco product packaging to display pictures of the ill effects of smoking, occupying the bottom half of the display area on both the front and the back of the packaging. On 3 March 2016, Department of Health (DOH) secretary Janette Garin started the implementation of Republic Act 10643, requiring tobacco manufacturers to include graphic health warnings on newer cigarette packaging.
With the Graphic Health Warning Act implemented, graphic health warnings are used on all newer cigarette packaging, and older packages using text-only warnings were required to be replaced by newer packaging incorporating graphic warnings. The 12 new warnings, showing photos of negative effects of smoking such as mouth cancer, impotence, and gangrene, are rotated every month, and on 3 November 2016 all cigarette packaging without graphic health warning messages was banned from sale. Labeling of cigarettes as "light" or "mild" is also forbidden by the Graphic Health Warning Act.
Russia
Warning messages on Russian cigarette packets were revised in 2013, bringing them into line with European Union standards.
Note: 12 different variants.
Serbia
The warning messages on Serbian cigarette packets are visually similar to those in European Union countries, but the texts used in Serbia are not translated from EU-approved texts.
Singapore
Text warnings were first added on cigarette packets. They used blunt, straight-to-the-point messages such as 'Smoking causes lung cancer'. They were later replaced by graphic warnings in August 2004. They featured gory pictures and were printed with the messages:
Smoking causes a slow painful death
Smoking harms your family
Tobacco smoke can kill babies
Smoking causes stroke
Smoking causes lung cancer
Smoking causes mouth diseases
In 2016, the images and warnings were revised, with images focusing on damaged organs. The following warnings show what is currently printed.
Smoking causes mouth diseases
Smoking can cause a slow and painful death
Smoking causes lung cancer
Smoking causes gangrene
Smoking causes neck cancer
Smoking harms your family
Smoking causes 92% of oral cancer
From 1 January 2009, people possessing cigarettes without the SDPC (Singapore Duty Paid Cigarettes) label will be committing an offence under the Customs and GST Acts. The law was passed to distinguish non-duty paid, contraband cigarettes from duty-paid ones.
Switzerland
Switzerland has four official languages, but only has warning messages in three languages. The fourth language, Romansh, is only spoken by 0.5% of the population, and those persons typically also speak either German or Italian. The three warning messages below are posted on cigarette packets, cartons and advertisements such as outdoor billboard posters:
Fumer tue. (Smoking kills.)
Rauchen tötet. (Smoking kills.)
Il fumo uccide. (Smoking kills.)
Somalia
A small warning, in Somali and English, appears on British American Tobacco brands, Royals and Pall Mall.
South Africa
In South Africa, the Tobacco Products Control Act, 1993 and its amendments (1999, 2007, 2009) stipulate that a warning related to the harmful effects (health, social, or economic) of tobacco smoking, or the beneficial effects of cessation, must be placed prominently on tobacco products, covering 15% of the obverse, 25% of the reverse and 20% of the sides of the packaging.
According to the draft Control of Tobacco Products and Electronic Delivery Systems Bill, 2018, new legislation, once enacted, will require uniform, plain colored packaging (branding and logos prohibited) containing the brand and product name in a standard typeface and color, a warning related to the harmful effects of tobacco smoking, or beneficial effects of cessation, and a graphic image of tobacco-related harm.
South Korea
In South Korea, general warnings on cigarette packaging have been used since 1976. The warning messages used since then have been:
From 1976 to 1989 (For your health, please refrain from smoking too much)
From December 1989 to 1996 (Smoking may cause lung cancer and it is especially dangerous for teenagers and pregnant women)
From 1996 to March 2005 Front (Smoking causes lung cancer and other diseases and it is especially dangerous for teenagers and pregnant women) Back (It is illegal to sell cigarettes to people under 19) and additionally, (You can be healthy and live longer if you quit), (Smoking also causes paralysis and heart diseases), (Smoking also damages your beloved children), (Smoking damages others)
From April 2005 to April 2007 Front 건강을 해치는 담배 그래도 피우시겠습니까? (Smoking damages your health. Do you still want to smoke?) Back 19세 미만 청소년에게 판매할 수 없습니다 (It is illegal to sell cigarettes to people under 19) and additionally, 금연하면 건강해지고 장수할 수 있습니다 (You can be healthy and live longer if you quit), 흡연은 중풍과 심장병도 일으킵니다 (Smoking also causes paralysis and heart diseases), 흡연은 사랑하는 자녀의 건강도 해칩니다 (Smoking also damages your beloved children), 당신이 흡연하면 다른 사람의 건강도 해칩니다 (Smoking damages others)
From April 2007 to April 2009 Front 흡연은 폐암 등 각종 질병의 원인이 되며, 특히 임신부와 청소년의 건강에 해롭습니다 (Smoking causes lung cancer and other diseases and it is especially dangerous for teenagers and pregnant women) Back 19세 미만 청소년에게 판매 금지! 당신 자녀의 건강을 해칩니다" (It is illegal to sell cigarettes to people under 19! It hurts your children's health)
From April 2009 to April 2011 (a prospectus) Front (Smoking damages your health. Once you start smoking, it is very difficult to quit) Back (It is illegal to sell cigarettes to people under 19! It hurts your children's health)
From December 2016, 50% of cigarette packages must contain warning elements, of which 30% must be graphic or photographic warnings. In addition to the existing warning (Smoking can be a cause of disease), the following warning will be mandatory: (Smoking can harm another person's health)
Sri Lanka
In Sri Lanka, a variety of warnings with graphic, disturbing images of tobacco-related harms (including cancer and heart attack) are placed prominently on cigarette packages.
Taiwan
The warnings in Taiwan begin with the phrase "行政院衛生署警告" (Warning from the Department of Health, Executive Yuan), followed by one of the following warnings:
吸菸有害健康
Smoking is hazardous to your health
孕婦吸菸易導致胎兒早產及體重不足
Smoking during pregnancy can easily cause premature birth and low birth weight
抽菸會導致肺癌﹑心臟病﹑氣腫及與懷孕有關的問題
Smoking can cause lung cancer, heart diseases, emphysema and pregnancy-related problems
吸菸害人害己
Smoking hurts yourself, and hurts others
懷孕婦女吸菸可能傷害胎兒,導致早產及體重不足
Smoking during pregnancy might harm the fetus and can cause premature birth and low birth weight
戒菸可減少健康的危害
Quitting smoking can reduce harm to your health
Because the Department of Health was reorganized into the Ministry of Health and Welfare, the images and warnings were revised in 2014. The following warnings show what is printed (the new warnings took effect on 1 June 2014).
吸菸導致皮膚老化
Smoking causes ageing of skin
菸癮困你一生
Tobacco addiction traps your life
吸菸會導致性功能障礙
Smoking causes sexual dysfunction
菸害導致胎兒異常及早產
Tobacco smoke causes fetal abnormalities and premature birth
不吸菸,你可以擁有更多
You can have more if you quit smoking
二手菸引發兒童肺炎、中耳炎、癌症
Secondhand smoke causes pneumonia, otitis media, and cancer in children
吸菸影響口腔衛生
Smoking affects oral hygiene
吸菸引發自己與家人中風與心臟病
Smoking causes stroke and heart disease in you and your family
Whether the warning is the old or the new version, it is marked with "戒煙專線0800-636363" (smoking cessation hotline: 0800-636363).
Thailand
In Thailand, a variety of warnings with graphic, disturbing images of tobacco-related harms (including a tracheotomy and rotting teeth) are placed prominently on cigarette packages. A recent study showed that the warnings made Thai smokers think more often about the health risks of smoking and about quitting smoking. Thailand introduced plain packaging in 2020.
Turkey
Front of packaging (covers 65% of surface)
or
Back of packaging (covers 40% of surface)
(Smokers die younger)
(Smoking clogs the arteries and causes heart attacks and paralysis.)
(Smoking causes lethal lung cancer)
(Smoking while pregnant will harm the baby)
(Protect your children, don't let them breathe your smoke.)
(Health agencies can help you quit smoking)
(Smoking is highly addictive, don't start)
(Stopping smoking reduces the risk of fatal heart and lung diseases)
(Smoking can cause a slow and painful death)
(To quit smoking ask for help from your doctor and ...)
(Smoking will slow the blood flow and cause impotence)
(Smoking causes early ageing of the skin)
(Smoking can damage the sperm and decreases fertility)
(Cigarette smoke contains carcinogens such as benzene, nitrosamines, formaldehyde and hydrogen cyanide.)
Ukraine
The warning messages on Ukrainian cigarette packets are also visually similar to those in European Union countries:
United Kingdom
In 1971, tobacco companies printed on the left side of cigarette packets "WARNING by H.M. Government, SMOKING CAN DAMAGE YOUR HEALTH".
In 1991, the EU tightened laws on tobacco warnings. "TOBACCO SERIOUSLY DAMAGES HEALTH" was printed on the front of all tobacco packs. An additional warning was also printed on the reverse of cigarette packs.
In 2003, new EU regulations required one of the following general warnings must be displayed, covering at least 30% of the surface of the pack:
Smoking kills
Smoking seriously harms you and others around you
Additionally, one of the following additional warnings must be displayed, covering at least 40% of the surface of the pack:
Smokers die younger
Smoking clogs the arteries and causes heart attacks and strokes
Smoking causes fatal lung cancer
Smoking when pregnant harms your baby
Protect children: don't make them breathe your smoke
Your doctor or your pharmacist can help you stop smoking
Smoking is highly addictive, don't start
Stopping smoking reduces the risk of fatal heart and lung diseases
Smoking can cause a slow and painful death
Get help to stop smoking: [telephone]/[postal address]/[internet address]/consult your doctor/pharmacist
Smoking may reduce the blood flow and cause impotence
Smoking causes ageing of the skin
Smoking can damage the sperm and decreases fertility
Smoke contains benzene, nitrosamines, formaldehyde and hydrogen cyanide
From October 2008, all cigarette products manufactured must carry picture warnings on the reverse. Every pack must have one of these warnings by October 2009.
Plain packaging, including prominent and standardised health warnings and minimal manufacturer information, became compulsory for all cigarette and hand-rolling tobacco packs manufactured after May 2016 and sold after May 2017.
United States
In 1966, the United States became the first nation in the world to require a health warning on cigarette packages.
In 1973, the Assistant Director of Research at R.J. Reynolds Tobacco Company wrote an internal memorandum regarding new brands of cigarettes for the youth market. He observed that, "psychologically, at eighteen, one is immortal" and theorized that "the desire to be daring is part of the motivation to start smoking." He stated, "in this sense the label on the package is a plus."
In 1999, Philip Morris USA purchased three brands of cigarettes from Liggett Group Inc. The brands were: Chesterfield, L&M, and Lark. At the time Philip Morris purchased the brands from Liggett, the packaging for those cigarettes included the statement "Smoking is Addictive". After Philip Morris acquired the three Liggett brands, it removed the statement from the packages.
Though the United States started the trend of labeling cigarette packages with warnings, today the country has some of the least restrictive labelling requirements for its packages. Warnings are usually in small typeface placed along one of the sides of the pack, in colors and fonts that closely resemble the rest of the package, so the warnings essentially blend into the packaging and do not stand out.
However, this is subject to change as the Family Smoking Prevention and Tobacco Control Act of 2009 requires color graphics with supplemental text that depicts the negative consequences of smoking to cover 50 percent of the front and rear of each pack. The nine new graphic warning labels were announced by the FDA in June 2011 and were required to appear on packaging by September 2012, though this was delayed by legal challenges.
In August 2011, five tobacco companies filed a lawsuit against the FDA in an effort to reverse the new warning mandate. Tobacco companies claimed that being required to promote government anti-smoking campaigns by placing the new warnings on packaging violates the companies' free speech rights. Additionally, R.J. Reynolds, Lorillard, Commonwealth Brands Inc., Liggett Group LLC and Santa Fe Natural Tobacco Company Inc. claimed that the graphic labels are an unconstitutional way of forcing tobacco companies to engage in anti-smoking advocacy on the government's behalf. A First Amendment lawyer, Floyd Abrams, represented the tobacco companies in the case, contending that requiring graphic warning labels on a lawful product cannot withstand constitutional scrutiny. The Association of National Advertisers and the American Advertising Federation also filed a brief in the suit, arguing that the labels infringe on commercial free speech and could lead to further government intrusion if left unchallenged.
On 29 February 2012, US District Judge Richard Leon ruled that the labels violate the right to free speech in the First Amendment. However, the following month the US Court of Appeals for the 6th Circuit upheld the majority of the Tobacco Control Act of 2009, including the part requiring graphic warning labels. In April 2013 the Supreme Court declined to hear the appeal to this ruling, allowing the new labels to stand. As the original ruling against the FDA images was not actually reversed, the FDA will again need to go through the process of developing the new warning labels, and the timetable and final product remain unknown. Also, rulings of the 6th Circuit are precedential only in the states comprising the 6th Circuit, i.e., Michigan, Ohio, Kentucky, and Tennessee.
Cigars
SURGEON GENERAL WARNING: Cigar Smoking Can Cause Cancers of the Mouth And Throat, Even If You Do Not Inhale.
SURGEON GENERAL WARNING: Cigars Are Not A Safe Alternative To Cigarettes.
SURGEON GENERAL WARNING: Tobacco Smoke Increases The Risk of Lung Cancer And Heart Disease, Even in Nonsmokers.
SURGEON GENERAL WARNING: Cigar Smoking Can Cause Lung Cancer And Heart Disease.
SURGEON GENERAL WARNING: Tobacco Use Increases The Risk of Infertility, Stillbirth, And Low Birth Weight.
SURGEON GENERAL WARNING: This Product Contains/Produces Chemicals Known to the State of California To Cause Cancer, And Birth Defects Or Other Reproductive Harm.
Stronger warning labels started to appear in May 2010.
Smokeless tobacco
Effective June 2010, the following labels began to appear on smokeless tobacco products (also known as chewing tobacco) and their advertisements.
WARNING: This product can cause mouth cancer.
WARNING: This product can cause gum disease and tooth loss.
WARNING: This product is not a safe alternative to cigarettes.
WARNING: Smokeless tobacco is addictive.
The new warnings are required to comprise 30 percent of two principal display panels on the packaging; on advertisements, the health warnings must constitute 20 percent of the total area.
Uruguay
In Uruguay, a variety of warnings with graphic, disturbing images of tobacco-related harms (including lung cancer and mouth cancer) are placed prominently on cigarette packages.
Venezuela
For many years in Venezuela, the only warning in cigarette packs was printed in a very small typeface along one of the sides:
"" (It has been determined that cigarette smoking is harmful to your health, Cigarette Tax Law) Since 14 September 1978
On 24 March 2005, another warning was introduced in every cigarette pack: "Este producto contiene alquitrán, nicotina y monóxido de carbono, los cuales son cancerígenos y tóxicos. No existen niveles seguros para el consumo de estas sustancias" ("This product contains tar, nicotine and carbon monoxide, which are carcinogenic and toxic. There are no safe levels for consumption of these substances").
1978's warning was not removed, so now every cigarette pack contains both warnings (one on each lateral).
In addition, since 24 March 2005, one of the following warnings is randomly printed very prominently, along with a graphical image, occupying 100% of the back of the pack (40% for the text warning and 60% for the image):
(This product is hazardous to your health and is addictive)
(Smoking causes bad breath, tooth decay and mouth cancer)
(Smoking causes lung cancer, coughing, pulmonary emphysema and chronic bronchitis); the picture is a comparison between a smoker's lung (left) and a healthy lung (right)
(Smoking causes cardiac infarction, R.I.P. bearer, Killed by smoking)
(Smoking while pregnant harms your baby)
(Children start smoking when they see adults smoke)
(Smoking cigarettes causes larynx cancer)
(Smoking causes impotence in men)
(Cigarette smoke also harms those who don't smoke)
(Take your first step today; quitting is possible)
Under the campaign "Venezuela 100% libre de humo" (Venezuela, 100% smoke-free), these warnings curiously appear only on cigarette packs and not on other tobacco products (which retain only the 1978 warning).
Vietnam
In Vietnam, a variety of warnings with graphic, disturbing images of tobacco-related harms (including a tracheotomy and rotting teeth) are placed prominently on cigarette packages.
References
External links
Tobacco Labelling Resource Centre
Warning: Graphic Cigarette Labels — slideshow by Life magazine
BBC News Online: "Spoof cigarette warnings slammed"
Directive 2001/37/EC of the European Parliament and of the Council of 5 June 2001
UCSF Tobacco Industry Videos Collection
Quit Now, website advertised on Australian packets of cigarettes
Cigarette Package Health Warnings International Status Report - Fourth Edition
Tobacco control
Cigarette packaging
Safety
Warning systems
Health effects of tobacco |
39600950 | https://en.wikipedia.org/wiki/List%20of%20Monterrey%20Institute%20of%20Technology%20and%20Higher%20Education%20faculty | List of Monterrey Institute of Technology and Higher Education faculty | This list of Monterrey Institute of Technology and Higher Education faculty includes current and former instructors and administrators of the Monterrey Institute of Technology and Higher Education, a university and high school system located in various parts of Mexico.
Eugenio Garza Sada, founder of ITESM
Past and present faculty
Bedrich Benes - Computer Science
Ismael Aguilar Barajas - Economics
Horacio Ahuett Garza - Mechanical engineering
Mario Moises Alvarez - Chemistry
José Emilio Amores - Chemistry and cultural promoter
León Ávalos y Vez - first director of the institution
Tamir Bar-On - Political Science
Alberto Bustani Adem
René Cabral Torres - Economics
Francisco Javier Carrillo Gamboa - Knowledge systems
María de la Luz Casas Pérez - Communications/Political Science
María de la Cruz Castro Ricalde - Literature
Susana Catalina Chacón Domínguez - International relations
Cristóbal Cobo - communications, new technology
Delia Elva Cruz Vega - Medicine
Anabella del Rosario Davila Martínez - Business
María de Lourdes Dieck-Assad - Economics, Former President of EGADE Business School
Ernesto Enkerlin - Environmental Studies
Jurgen Faust - Professor of Design
Dora Elvira García González - philosophy
Silverio García Lara - Biotechnology
Noemi García Ramírez - Medicine
María Teresa González-Garza y Barron - Biological sciences
José Luis González Velarde - Mechanical engineering
Carlos Guerrero de Lizardi - Economics/Public policy
Julio César Gutiérrez Vega - Physics
George Haley - Marketing
Usha Haley - International Business
Carmen Hernández Brenes - Biotechnology
Bryan William Husten Corregan - Business
Jorge Ibarra Salazar - Economics
Vyacheslav Kalashnikov Polishchuk - Mathematics
Sergei Kanaoun Mironov - Manufacturing systems
Blanca Guadalupe López Morales - Humanities
José Carlos Lozano Rendón - Communications
Ernesto Martens - Chemical engineering
Carlos Medina Plascencia - Political Science
María Elena Meneses Rocha - Journalism
Arturo Molina Gutiérrez - Computer science
Isidro Morales Moreno - Political science
Héctor Moreira Rodríguez - administration
Daniel Moska Arreola - Finance and administration
Javier Gonzalez-Sanchez - Computer Science, Former CS Program Director (Guadalajara campus)
Maria Elena Chavez-Echeagaray - Computer Science, Former CS Program Director (Guadalajara campus)
David Muñoz Rodríguez - Electrical engineering
Rubén Nuñez de Cáceres - Ethics, founder of the Centro de Valores Humanos
Joaquín Esteban Osaguera Peña - Physics
Raúl Monroy Borja - Computer science
Alejandro Poiré Romero - Dean of the School of Social Sciences and Government
Pol Popovic Karic - Literature
Oliver Matthias Probst Oleszewski - Physics
Rajagopal - Management
David Noel Ramírez Padilla - Business, former rector of the system
Rafael Rangel Sostmann - Engineering, former rector of the system
Javier Francisco Reynoso Javier - Business
Marco Rito-Palomares - Biochemical Engineering
Mireille Roccatti - Political science
Eduardo Rodríguez Oreggi y Roman - Public policy
Ramón Martín Rodríguez Dagnino - Communications
Ciro Ángel Rodríguez González - Manufacturing
Mark B. Rosenberg - Political Science/Latin American Studies
Julio E. Rubio - Mexico City Regional Dean of the School of Humanities and Education
Olimpia Salas Martínez - Materials engineering
Pablo Telman Sánchez Ramírez - International law
José Fernández Santillán - Political Science
Roberto Joaquín Santillán Salgado - Business administration
Arturo Santos García - Medicine
Macario Schettino - economics, political science
Sergio Román Othón Serna Saldívar - Biotechnology
Eduardo Sojo Garza-Aldape - Economics
María Isabel Studer Noguez - International relations
Guillermo Torre Amione - Medicine
Pedro Ruben Torres Estrada - Law
Carlos Manuel Urzúa Macías - Economics, Former Secretary of Finance and Public Credit of Mexico.
Cesar Vargas Rosales - Electrical engineering
David Velázquez Fernández - Medicine
Jorge Santos Welti Chanes - Biotechnology
Adrianni Zanatta Alarcón - Mechatronics Engineering
Zidane Zeraoui El Awad - Political Science
Alex Elias Zuñiga - Mechanical engineering
Roberto F Delgadillo - Biophysical chemistry: FRET, thermodynamics, fast kinetics, fluorescence, malaria, toxoplasma, drug screening, synthetic peptides, biomedicine.
Carlos Elizondo Mayer-Serra - Economics, Former Ambassador of Mexico for the OECD.
References
Monterrey Institute of Technology and Higher Education faculty |
51113869 | https://en.wikipedia.org/wiki/Prisma%20%28app%29 | Prisma (app) | Prisma is a photo-editing mobile application that uses neural networks and artificial intelligence to apply artistic effects to transform images.
The app was created by Alexey Moiseenkov, Oleg Poyaganov, Ilya Frolov, and Andrey Usoltsev. It was launched in June 2016 as a free mobile app. It debuted on iOS on the Apple App Store during the first week of June and became the leading app on the App Store in Russia and other neighboring countries. A week after its launch, the app had received over 7.5 million downloads. It had over 1 million active users as of July 2016. On 19 July 2016, the developer launched a beta version of the app for Android, which the developers closed a few hours later after receiving feedback from users. This version was released publicly on 24 July 2016 on Google Play.
In July 2016, the developer announced a video and virtual reality version of the app was under development.
On 7 July 2017, Prisma launched a new app called Sticky, which turns selfies into stickers for sharing to social feeds.
History
The app was created by a team led by Alexey Moiseenkov, who also founded Prisma Labs, based in Moscow. Moiseenkov previously worked at Mail.Ru and resigned from his job to dedicate his time to the development of the app. He has said that development took one and a half months and that the team did nothing to promote the app.
The algorithm that powers the app is based on the open source programming and algorithms behind DeepArt.
Features
Users can upload pictures and select a variety of filters to transform the picture with an artistic effect. At launch, the app offered twenty different filters. Additional filters are added daily. In July 2016, Moiseenkov stated that the app will offer forty filters by the end of the month.
The image rendering takes place on Prisma Labs' servers and uses a neural network with artificial intelligence to add the artistic effect. The result is delivered back to the user's phone. Unlike other photo-editing apps, Prisma renders the image by going through different layers and recreating the image rather than inserting a layer over it.
In August 2016, the iOS version of the app was updated to edit images offline by using the phone's processor for image rendering.
Reception
Downloads
One week after its debut on the iOS App Store, the app had been downloaded over 7.5 million times and had over 1 million active users. It also became the top-listed app in Russia and its neighboring countries. By the end of July 2016, it had been installed on over 12.5 million devices and had over 1.5 million active users worldwide. According to App Annie, it was listed in the top 10 apps on the App Store in 77 different countries.
On the first day of the Android version's release, it received over 1.7 million downloads, with 50 million pictures processed by the app.
Research and technology
The research paper behind the Prisma app's technology is "A Neural Algorithm of Artistic Style" by Leon Gatys, Alexander Ecker and Matthias Bethge, presented at the premier machine learning conference, Neural Information Processing Systems (NIPS), in 2015. The technology is an example of a neural style transfer algorithm. The technology used in Prisma was developed independently of and before the app, and the university and the company have no affiliation with one another.
Further work from Stanford University, titled Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Justin Johnson, Alexandre Alahi and Li Fei-Fei, has also achieved real-time style transfer, including on video.
The code for these papers is available at no charge on GitHub for research purposes.
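The optimization described in these papers can be sketched compactly. The following Python code is a minimal, illustrative Gatys-style implementation only; it assumes PyTorch and torchvision (0.13 or newer) are installed and that files named content.jpg and style.jpg exist, and the layer choices, loss weight and step count are assumptions made for the example rather than Prisma's actual, unpublished configuration.

```python
# Minimal Gatys-style neural style transfer sketch (illustrative only).
# Assumptions: PyTorch and torchvision >= 0.13 installed; "content.jpg" and
# "style.jpg" exist.  Layer indices, weights and step counts are assumed
# choices for the example, not Prisma's configuration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("content.jpg")  # the photo to stylize
style = load_image("style.jpg")      # the artwork supplying the style

# Pretrained VGG-19 as a fixed feature extractor (ImageNet normalization is
# omitted here for brevity; adding it improves results).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = (0, 5, 10, 19, 28)  # conv1_1 ... conv5_1
CONTENT_LAYER = 21                 # conv4_2

def features(x):
    """Run x through VGG, collecting the style and content feature maps."""
    style_feats, content_feat = {}, None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats[i] = x
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):
    """Gram matrix: channel-to-channel correlations that summarize style."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

with torch.no_grad():
    style_targets = {i: gram(f) for i, f in features(style)[0].items()}
    content_target = features(content)[1]

# Start from the content photo and optimize its pixels directly.
image = content.clone().requires_grad_(True)
optimizer = torch.optim.Adam([image], lr=0.02)

for step in range(200):
    optimizer.zero_grad()
    style_feats, content_feat = features(image)
    content_loss = F.mse_loss(content_feat, content_target)
    style_loss = sum(F.mse_loss(gram(style_feats[i]), style_targets[i])
                     for i in STYLE_LAYERS)
    (content_loss + 1e4 * style_loss).backward()  # style weight is an assumption
    optimizer.step()
    image.data.clamp_(0, 1)

transforms.ToPILImage()(image.detach().squeeze(0).cpu()).save("stylized.jpg")
```

The perceptual-losses approach from the Stanford paper mentioned above replaces this per-image optimization loop with a feed-forward network trained against similar losses, so stylization takes a single forward pass; that design choice is what makes near-real-time filters of the kind Prisma offers practical.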
See also
DeepArt
List of Prisma (app) filters
References
External links
2016 software
Mobile software
Photo software
Video software
IOS software
Android (operating system) software
Neural network software
Companies based in Moscow |
13578914 | https://en.wikipedia.org/wiki/Supervisor%20Call%20instruction | Supervisor Call instruction | This article covers the specific instruction on the IBM System/360 and successor mainframe computers, and compatible machines. For the general concept of an instruction for issuing calls to an operating system, see System call.
A Supervisor Call instruction (SVC) is a hardware instruction used by the System/360 family of IBM mainframe computers up to contemporary zSeries, the Amdahl 470V/5, 470V/6, 470V/7, 470V/8, 580, 5880, 5990M, and 5990A, and others; Univac 90/60, 90/70 and 90/80, and possibly others; the Fujitsu M180 (UP) and M200 (MP), and others; and is also used in the Hercules open source mainframe emulation software. It causes an interrupt to request a service from the operating system. The system routine providing the service is called an SVC routine. SVC is a system call.
Rationale
IBM mainframes in the System/360 and successor families operate in one of two states, problem state or supervisor state, and in one of sixteen storage access keys (0 to 15). In problem state, a large set of general-purpose non-privileged instructions is available to a user program. In supervisor state, system programs are additionally able to use a small set of privileged instructions which are generally intended for supervisory functions. These functions may affect other users, other processors, or the entire computer system. In storage key 0 a program is able to access all addressable storage; otherwise it is limited to storage areas with a matching key.
A program is only allowed to access specific supervisory functions after thorough authorization checking by the operating system: DEBCHK (SVC 117), TESTAUTH (SVC 119), and possibly additional tests. Programs which fail any of these tests are ABENDed, that is, abnormally terminated, and immediately cease processing. Some of these tests were not available in OS/360 and were added in OS/VS1, SVS or MVS/370; all were available in MVS/370 and subsequent releases, and remain available to this day.
In OS/VS1, OS/VS2 (SVS), MVS/370 and subsequent versions of the OS, the MODESET function (SVC 107) obviated the need for many user-written SVCs, as this system SVC accommodated both changes in mode (problem state to supervisor state) and key (8-15 [user] to 0-7 [system]) in a single operation, and many user-written SVCs had originally been intended for simple mode and key changes anyway. Subsequently, the only special requirements were that the jobstep be APF-authorized and that the MODESET-invoking program be resident in a concatenation of libraries all of which were identified as authorized, a secure approach that was completely under the installation's control. This generally simplified user controls over authorization, although it required some simple changes to the application. User installations generally favored this approach, and the overall reliability of the system was significantly improved as a result.
Although mainframe applications are typically synchronous processes, the operating system itself is naturally asynchronous, even though it also supports many processes which are naturally synchronous. When an application requests a system service which is naturally asynchronous, such as input/output processing, a mechanism for synchronizing the application and the operating system must be employed. This mechanism is provided through functions which are built into the operating system, or are specifically supported by it, including: WAIT (temporarily halt application processing until an external event has occurred); POST (indicate the occurrence of an external event so application processing may continue); and SYNCH (change system processing mode, supervisor to user and system key to user key, while preserving system integrity, and synchronously perform a function on behalf of the application, after which supervisor processing may continue).
The OS/360 SVCs table below indicates the conditions under which these synchronizing facilities may be employed.
Implementation
SVC is a two-byte instruction with the hexadecimal operation code 0A; the second byte of the instruction, the SVC number, indicates the specific request. The SVC number can be any value from 0 to 255, with the assignment of particular SVC numbers being up to the implementer of the operating system; e.g. on IBM's MVS, SVC 3 is used to terminate a program, while on the UNIVAC VS/9 and Fujitsu BS2000 operating systems, SVC 9 was used for the same purpose.
When a program issues an SVC, an interrupt occurs. The PSW, an 8-byte (on the System/360 and S/370) or 16-byte (on the z/System) privileged register containing, among other things, the current address of the instruction to be executed, the privilege bit (1 if privileged), and the storage key, is saved at a fixed real address: locations 32-39 on the 360 and 370, 320-335 on the z/System. The PSW is then loaded from a different real address: 96-103 on the 360 and 370, 448-463 on the z/System. Execution resumes at the address that was loaded into the PSW. Bits 24-31 of the saved PSW (real address 35 on the 360 and 370, 323 on the z/System) contain the supervisor call number.
SVC invokes a supervisory function—usually implemented as a "closed subroutine" of the system's SVC interrupt handler. Information passed to and from the SVC routines is passed in general purpose registers or in memory.
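The following Python sketch is a deliberately simplified model of this dispatch mechanism, with invented names and data structures; it is illustrative only and does not reproduce real System/360 hardware behavior. It shows the essential flow: the current PSW is saved, a new PSW places the CPU in supervisor state, a routine is selected by SVC number (here SVC 3, EXIT, and SVC 10, GETMAIN/FREEMAIN, as described elsewhere in this article), and the old PSW is restored on return.

# Simplified, illustrative model of SVC interrupt dispatch; not real hardware behavior.
svc_table = {}          # maps SVC number -> handler, standing in for the SVC routine table

def svc_routine(number):
    def register(fn):
        svc_table[number] = fn
        return fn
    return register

@svc_routine(3)
def svc_exit(regs):      # SVC 3 (EXIT) ends the requesting program
    regs["terminated"] = True

@svc_routine(10)
def svc_regmain(regs):   # SVC 10 (GETMAIN/FREEMAIN); parameters exchanged in registers
    regs["r1"] = 0x8000  # pretend address of the allocated storage

def issue_svc(number, regs, psw):
    old_psw = dict(psw)              # hardware stores the current PSW at a fixed real address
    psw["supervisor_state"] = True   # the new PSW places the CPU in supervisor state
    handler = svc_table.get(number)
    if handler is None:
        raise RuntimeError("program check: undefined SVC %d" % number)
    handler(regs)                    # the SVC routine runs in supervisor state
    psw.update(old_psw)              # returning reloads the saved PSW (LPSW / EXIT)

regs = {"r1": 0, "terminated": False}
psw = {"supervisor_state": False}
issue_svc(10, regs, psw)
print(hex(regs["r1"]), psw["supervisor_state"])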
Under OS/360 and successors, return from a type 2, 3 or 4 SVC routine is via an SVC 3 (EXIT) invocation, and from other SVC types via the privileged Load PSW (LPSW) instruction, which is executed on behalf of the SVC routine by the control program's dispatcher or SVC interrupt handler.
Non-IBM-developed operating systems also use this mechanism: MUSIC/SP, developed by McGill University in Montreal, Canada for IBM mainframes, and, for non-IBM mainframes, VS/9, developed by Univac (from the TSOS operating system for the RCA Spectra 70 series computers) for the UNIVAC Series 90 mainframe line, and the B800 operating system (also developed from TSOS) for Fujitsu's mainframes, all use the LPSW instruction to exit from a Supervisor Call.
The choice of whether a supervisor call should return to the calling program directly through an LPSW instruction or through some other means, such as a subroutine return instruction or a supervisor call itself, is a matter of design. There is no obvious "right" way to do this; there can be reasons for both methods. Using an LPSW instruction to exit an SVC routine allows for faster execution, but means actual testing of the routine has to be done on a dedicated machine running the code as part of an actual operating system supervisor. If the code is written as an ordinary subroutine, it can be tested in the same manner as any ordinary program and potentially deployed without having to modify it. It also allows metrics to be measured, such as how long a supervisor call routine took to complete its task, allowing for analysis of routines that are excessively long in execution time (or ones that are very fast).
In OS/360 and later incarnations of the OS, branch and link entry points are alternatives to SVC invocations for some supervisor mode routines. In MVS/SP V1R3 and later incarnations of the OS, Program Call (PC) entries have augmented SVCs for invocations of many supervisory functions by both authorized and unauthorized programs; and some functions may only be invoked by branch or PC entries, e.g. STARTIO. (This also has the advantage of preventing IBM operating systems from being run on non-IBM hardware.)
Different IBM operating systems have little compatibility in the specific codes used or in the supervisor services which may be invoked. VM/370 and z/VM systems use the DIAG instruction in a similar manner, and leave SVC for the use by operating systems running in virtual machines. Most OS/360 SVCs have been maintained for "legacy" programs, but some SVCs have been "extended" over the passage of time.
OS/360 and successor system SVCs
In OS/360 and successor systems SVC numbers 0 through approximately 127 are defined by IBM, and 255 downwards are available for use by an installation's systems programming staff. z/OS changed this to SVC numbers 0 through approximately 200 for IBM, and 255 downwards for the installation, as additional system services, primarily in support of encryption/decryption, were being implemented by IBM using SVCs. SVC routines must have module names in a specific format beginning with IGC.
By system design, the term "disabled" means disabled for all interruptions except for machine check interruptions in pre-MVS/370 systems, and with the "local lock" being held, but not "disabled" for any interruptions in MVS/370 and all later systems. The former is physical disablement, the latter is logical disablement, as an address space's "local lock" has the same impact within its address space as physical disablement, but it has no impact on other address spaces.
OS/360 defined four types of SVC routines, called "Type 1" through "Type 4"; MVS/370 added an additional "Type 6", which is similar to "Type 1" except that the SVC routine is physically disabled. "Type 5" was neither defined nor implemented. The following information, part of a table for OS/360, augmented for MVS/370 and successor systems, gives an idea of the considerations involved in writing an SVC routine.
The size restrictions on types 3 and 4 SVC routines are necessary because they are loaded into designated "transient areas" (PLPA in post-MVT) when invoked.
An example of a Type 1 SVC is SVC 10, used for both GETMAIN and FREEMAIN, which allocate an area of main storage to a task and subsequently release it, respectively. SVC 10 is known informally as "REGMAIN" as it exchanges parameters through general purpose registers only, and can both GET and FREE storage. SVC 4 and SVC 5 can perform similar GET and FREE functions, respectively, but exchange parameters through in-storage parameter lists.
An example of Type 2 is SVC 42, ATTACH, which creates a new task.
An example of Type 3 is SVC 33, IOHALT, which terminates I/O operations on a non-DASD device. This SVC was changed to Type 2 in OS/VS as IOHALT is heavily utilized in many teleprocessing-based systems.
An example of a Type 4 is SVC 19, OPEN, used to make a dataset available for use by a user program, which includes modules common to all access methods and calls additional modules specific to each access method. OPEN also supports datasets which are to be operated on by a "roll your own" access method, such as those which are accessed using EXCP.
An example of Type 6 is SVC 107, MODESET, which obtains no locks, but is able to change system mode and system key, in accordance with passed parameters.
Security
OS/360 did not, in general, have any way of restricting the use of SVCs. Consequently, there were quite a number of unintentional system- and data-integrity exposures which were possible by employing certain sequences of SVCs and other instructions. It became common practice for curious users to attempt to discover these exposures, and some system programmers exploited them rather than develop their own user-written SVCs.
Beginning with MVS/370, IBM considered it a product defect if a system design error would allow an application program to enter supervisor state without authorization. They mandated that all IBM SVCs be protected to close all system- and data-integrity exposures. They "guaranteed" to close such exposures as these were discovered. By Release 3.7 of MVS/370 in 1977 nearly every such exposure had indeed been identified and closed, at the cost of 100,000 Authorized Program Analysis Reports (APARs) and related Program temporary fixes (PTFs). This was a remarkable achievement, as system "up time" was thereafter measured in years, rather than in days or even in hours.
Notes
References
Further reading
IBM mainframe operating systems
System calls |
18938226 | https://en.wikipedia.org/wiki/Digital%20rights%20management | Digital rights management | Digital rights management (DRM) tools or technological protection measures (TPM) are a set of access control technologies for restricting the use of proprietary hardware and copyrighted works. DRM technologies try to control the use, modification, and distribution of copyrighted works (such as software and multimedia content), as well as systems within devices that enforce these policies.
Worldwide, many laws have been created which criminalize the circumvention of DRM, communication about such circumvention, and the creation and distribution of tools used for such circumvention. Such laws are part of the United States' Digital Millennium Copyright Act (DMCA), and the European Union's Information Society Directive (the French DADVSI is an example of a member state of the European Union implementing the directive).
Common DRM techniques include restrictive licensing agreements, under which access to digital materials, both copyrighted and in the public domain, is restricted to consumers as a condition of entering a website or downloading software.
Other techniques include encryption, scrambling of expressive material, and the embedding of tags designed to control access and reproduction of information, including backup copies for personal use. DRM technologies enable content publishers to enforce their own access policies on content, such as restrictions on copying or viewing. These technologies have been criticized for restricting individuals from copying or using the content legally, such as by fair use. DRM is in common use by the entertainment industry (e.g., audio and video publishers). Many online music stores, such as Apple's iTunes Store, and e-book publishers and vendors, such as OverDrive, also use DRM, as do cable and satellite service operators, to prevent unauthorized use of content or services. However, Apple dropped DRM from all iTunes music files around 2009.
Industry has expanded the use of DRM to more traditional hardware products, such as Keurig's coffeemakers, Philips' light bulbs, mobile device power chargers, and John Deere's tractors. For instance, tractor companies try to prevent farmers from making DIY repairs by invoking DRM laws such as the DMCA.
The use of digital rights management is not without controversy. DRM users argue that the technology is necessary to prevent intellectual property such as media from being copied, just as physical locks are needed to prevent personal property from being stolen, that it can help the copyright holder maintain artistic control, and that it supports licensing modalities such as rentals. Critics of DRM contend that there is no evidence that DRM helps prevent copyright infringement, arguing instead that it serves only to inconvenience legitimate customers, and that DRM can stifle innovation and competition. Furthermore, works can become permanently inaccessible if the DRM scheme changes or if the service is discontinued. DRM can also restrict users from exercising their legal rights under the copyright law, such as backing up copies of CDs or DVDs (instead of having to buy another copy, if it can still be purchased), lending materials out through a library, accessing works in the public domain, or using copyrighted materials for research and education under the fair use doctrine.
Introduction
The rise of digital media and analog-to-digital conversion technologies has vastly increased the concerns of copyright-owning individuals and organizations, particularly within the music and movie industries. While analog media inevitably lose quality with each copy generation, and in some cases even during normal use, digital media files may be duplicated an unlimited number of times with no degradation in the quality. The rise of personal computers as household appliances has made it convenient for consumers to convert media (which may or may not be copyrighted) originally in a physical, analog or broadcast form into a universal, digital form (this process is called ripping) for portability or viewing later. This, combined with the Internet and popular file-sharing tools, has made unauthorized distribution of copies of copyrighted digital media (also called digital piracy) much easier.
In 1983, a very early implementation of digital rights management (DRM) was the Software Service System (SSS) devised by the Japanese engineer Ryuichi Moriya and subsequently refined under the name superdistribution. The SSS was based on encryption, with specialized hardware that controlled decryption and also enabled payments to be sent to the copyright holder. The underlying principle of the SSS, and subsequently of superdistribution, was that the distribution of encrypted digital products should be completely unrestricted and that users of those products would not just be permitted to redistribute them but would actually be encouraged to do so.
Technologies
Verifications
Product keys
One of the oldest and least complicated DRM protection methods for computer and Nintendo Entertainment System games was to pause the game and prompt the player to look up a certain page in a booklet or manual that came with the game; if the player lacked access to such material, they could not continue. A product key, a typically alphanumeric serial number used to represent a license to a particular piece of software, serves a similar function. During the installation process or launch of the software, the user is asked to input the key; if the key corresponds to a valid license (typically verified via internal algorithms), it is accepted and the user who bought the game can continue. In modern practice, product keys are typically combined with other DRM practices (such as online "activation"), as the software could be cracked to run without a product key, or "keygen" programs could be developed to generate keys that would be accepted.
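Vendors' actual validation algorithms are proprietary, but a hypothetical, purely illustrative check in Python conveys the general idea: the installer derives a value from the typed-in key and accepts the key only if that value satisfies a rule that genuine keys were generated to meet. The key format and rule below are invented for the example.

# Hypothetical product-key check; real vendors use proprietary, often cryptographic, schemes.
def is_valid_key(key):
    key = key.replace("-", "").upper()
    if len(key) != 16 or not key.isalnum():
        return False
    # Toy rule: the sum of the character codes must be divisible by 7
    return sum(ord(ch) for ch in key) % 7 == 0

print(is_valid_key("ABCD-1234-EFGH-5678"))   # True or False depending on the toy rule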
Limited install activations
Some DRM systems limit the number of installations a user can activate on different computers by requiring authentication with an online server. Most games with this restriction allow three or five installs, although some allow an installation to be recovered when the game is uninstalled. This not only limits users who have more than three or five computers in their homes, but can also prove to be a problem if the user upgrades the computer's operating system or reformats the computer's storage device.
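A minimal sketch of how such an activation limit might be tracked on the server side, with invented names; real DRM servers additionally bind activations to hardware fingerprints and signed tokens.

# Illustrative only: counting activations of a product key against a fixed install limit.
MAX_ACTIVATIONS = 3
activations = {}   # product key -> set of machine identifiers already activated

def activate(key, machine_id):
    machines = activations.setdefault(key, set())
    if machine_id in machines:
        return True                      # reinstalling on a known machine is allowed
    if len(machines) >= MAX_ACTIVATIONS:
        return False                     # limit reached; a previous install must be released
    machines.add(machine_id)
    return True

print(activate("KEY-1", "pc-a"), activate("KEY-1", "pc-b"),
      activate("KEY-1", "pc-c"), activate("KEY-1", "pc-d"))   # the last call is refused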
In mid-2008, the Windows version of Mass Effect marked the start of a wave of titles primarily making use of SecuROM for DRM and requiring authentication with a server. The use of the DRM scheme in 2008's Spore backfired and there were protests, resulting in a considerable number of users seeking an unlicensed version instead. This backlash against the three-activation limit was a significant factor in Spore becoming the most pirated game in 2008, with TorrentFreak compiling a "top 10" list with Spore topping the list. However, Tweakguides concluded that the presence of intrusive DRM does not appear to increase video game piracy, noting that other games on the list, such as Call of Duty 4 and Assassin's Creed, use DRM which has no install limits or online activation. Additionally, other video games that do use intrusive DRM, such as BioShock, Crysis Warhead, and Mass Effect, do not appear on the list.
Persistent online authentication
Many mainstream publishers continued to rely on online DRM throughout the latter half of 2008 and early 2009, including Electronic Arts, Ubisoft, Valve, and Atari, The Sims 3 being a notable exception in the case of Electronic Arts. Ubisoft broke with the tendency to use online DRM in late 2008, with the release of Prince of Persia, as an experiment to "see how truthful people really are" regarding the claim that DRM was inciting people to use illegal copies. Although Ubisoft has not commented on the results of the "experiment", Tweakguides noted that two torrents on Mininova had over 23,000 people downloading the game within 24 hours of its release.
Ubisoft formally announced a return to online authentication on 9 February 2010, through its Uplay online game platform, starting with Silent Hunter 5, The Settlers 7, and Assassin's Creed II. Silent Hunter 5 was first reported to have been compromised within 24 hours of release, but users of the cracked version soon found out that only early parts of the game were playable. The Uplay system works by having the installed game on the local PC incomplete and then continuously downloading parts of the game code from Ubisoft's servers as the game progresses. Software that could bypass Ubisoft's DRM in Assassin's Creed II was released more than a month after the PC version's release, in the first week of April; it did so by emulating a Ubisoft server for the game. Later that month, a real crack was released that was able to remove the connection requirement altogether.
In March 2010, Uplay servers suffered a period of inaccessibility due to a large-scale DDoS attack, causing around 5% of game owners to become locked out of playing their game. The company later credited owners of the affected games with a free download, and there has been no further downtime.
Other developers, such as Blizzard Entertainment, are also shifting to a strategy where most of the game logic is handled on the server side, by the servers of the game maker. Blizzard uses this strategy for its game Diablo III, and Electronic Arts used the same strategy with their reboot of SimCity, the necessity of which has been questioned.
Encryption
An early example of a DRM system is the Content Scramble System (CSS) employed by the DVD Forum on DVD movies. CSS uses an encryption algorithm to encrypt content on the DVD disc. Manufacturers of DVD players must license this technology and implement it in their devices so that they can decrypt the encrypted content to play it. The CSS license agreement includes restrictions on how the DVD content is played, including what outputs are permitted and how such permitted outputs are made available. This keeps the encryption intact as the video material is played out to a TV.
In 1999, Jon Lech Johansen released an application called DeCSS, which allowed a CSS-encrypted DVD to play on a computer running the Linux operating system, at a time when no licensed DVD player application for Linux had yet been created. The legality of DeCSS is questionable: one of the authors has been the subject of a lawsuit, and reproduction of the keys themselves is subject to restrictions as illegal numbers.
Encryption can ensure that other restriction measures cannot be bypassed by modifying the software, so sophisticated DRM systems rely on encryption to be fully effective. More modern examples include ADEPT, FairPlay, and the Advanced Access Content System.
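The general pattern shared by these systems, not the CSS cipher itself, can be sketched in Python with a standard symmetric cipher from the third-party cryptography package: the content is distributed only in encrypted form, and only software or devices holding the licensed title key can turn it back into playable data.

# Generic encrypted-content pattern; illustrative only, not the actual CSS/AACS algorithms.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

title_key = Fernet.generate_key()       # in a real system, distributed only to licensed players
cipher = Fernet(title_key)

disc_content = cipher.encrypt(b"movie bitstream ...")   # what would ship on the disc

def licensed_player_play(encrypted_blob, key):
    return Fernet(key).decrypt(encrypted_blob)           # possible only with the title key

print(licensed_player_play(disc_content, title_key))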
Copying restriction
Further restrictions can be applied to electronic books and documents, in order to prevent copying, printing, forwarding, and saving backups. This is common for both e-publishers and enterprise Information Rights Management. It typically integrates with content management system software but corporations such as Samsung Electronics also develop their own custom DRM systems.
While some commentators believe DRM makes e-book publishing complex, it has been used by organizations such as the British Library in its secure electronic delivery service to permit worldwide access to substantial numbers of rare documents which, for legal reasons, were previously only available to authorized individuals actually visiting the Library's document centre at Boston Spa in England.
There are four main e-book DRM schemes in common use today, one each from Adobe, Amazon, Apple, and the Marlin Trust Management Organization (MTMO).
Adobe's DRM is applied to EPUBs and PDFs, and can be read by several third-party e-book readers, as well as Adobe Digital Editions (ADE) software. Barnes & Noble uses a DRM technology provided by Adobe, applied to EPUBs and the older PDB (Palm OS) format e-books.
Amazon's DRM is an adaptation of the original Mobipocket encryption and is applied to Amazon's .azw4, KF8, and Mobipocket format e-books. Topaz format e-books have their own encryption system.
Apple's FairPlay DRM is applied to EPUBs and can currently only be read by Apple's iBooks app on iOS devices and Mac OS computers.
The Marlin DRM was developed and is maintained in an open industry group known as the Marlin Developer Community (MDC) and is licensed by MTMO. (Marlin was founded by five companies, Intertrust, Panasonic, Philips, Samsung, and Sony.) The Kno online textbook publisher uses Marlin to protect e-books it sells in the EPUB format. These books can be read on the Kno App for iOS and Android.
Anti-tampering
The Microsoft operating system, Windows Vista, contains a DRM system called the Protected Media Path, which contains the Protected Video Path (PVP). PVP tries to stop DRM-restricted content from playing while unsigned software is running, in order to prevent the unsigned software from accessing the content. Additionally, PVP can encrypt information during transmission to the monitor or the graphics card, which makes it more difficult to make unauthorized recordings.
Bohemia Interactive have used a form of anti-tampering technology since Operation Flashpoint: Cold War Crisis, wherein if the game copy is suspected of being unauthorized, annoyances such as guns losing their accuracy or the player being turned into a bird are introduced. Croteam, the company that released Serious Sam 3: BFE in November 2011, implemented a different form of DRM wherein, instead of displaying error messages that stop the illicit version of the game from running, it causes a special invincible foe to appear in the game and constantly attack the player until they are killed.
Regional lockout
Also in 1999, Microsoft released Windows Media DRM, which read instructions from media files in a rights management language that stated what the user may do with the media. Later versions of Windows Media DRM implemented music subscription services that make downloaded files unplayable after subscriptions are cancelled, along with the ability for a regional lockout.
Tracking
Watermarks
Digital watermarks are steganographically embedded within audio or video data during production or distribution. They can be used for recording the copyright owner, the distribution chain or identifying the purchaser of the music. They are not complete DRM mechanisms in their own right, but are used as part of a system for copyright enforcement, such as helping provide prosecution evidence for legal purposes, rather than direct technological restriction.
Some programs used to edit video and/or audio may distort, delete, or otherwise interfere with watermarks. Signal/modulator-carrier chromatography may also separate watermarks from original audio or detect them as glitches. Additionally, comparison of two separately obtained copies of audio using simple, home-grown algorithms can often reveal watermarks.
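A toy illustration of the embedding idea in Python, using least-significant-bit replacement in raw audio samples; commercial watermarking schemes are far more sophisticated and far more robust than this.

# Toy least-significant-bit watermark in integer audio samples; illustrative only.
def embed_watermark(samples, bits):
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit    # overwrite the least significant bit
    return marked

def extract_watermark(samples, length):
    return [s & 1 for s in samples[:length]]

audio = [1000, -2001, 512, 77, -34, 9000, 123, -8]   # pretend PCM samples
mark = [1, 0, 1, 1, 0, 0, 1, 0]                      # e.g. an encoded purchaser or batch ID
print(extract_watermark(embed_watermark(audio, mark), len(mark)) == mark)   # True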
Metadata
Sometimes, metadata is included in purchased media which records information such as the purchaser's name, account information, or email address. Also included may be the file's publisher, author, creation date, download date, and various notes. This information is not embedded in the played content, like a watermark, but is kept separate from it, within the file or stream.
As an example, metadata is used in media purchased from Apple's iTunes Store for DRM-free as well as DRM-restricted versions of their music or videos. This information is included as MPEG standard metadata.
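Such tags can be inspected with ordinary metadata tools. The sketch below uses the third-party mutagen library to list the tag atoms of an MP4/AAC file; the file name is illustrative, and which purchaser-related fields are present (if any) depends entirely on the store that sold the file.

# Listing the metadata atoms of a purchased MP4/AAC file with the third-party mutagen library.
# pip install mutagen; the file path below is only an example.
from mutagen.mp4 import MP4

audio = MP4("purchased_track.m4a")
for atom, value in (audio.tags or {}).items():
    print(atom, value)    # may include account name, purchase date, and similar notes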
Television
The CableCard standard is used by cable television providers in the United States to restrict content to services to which the customer has subscribed.
The broadcast flag concept was developed by Fox Broadcasting in 2001, and was supported by the MPAA and the U.S. Federal Communications Commission (FCC). A ruling in May 2005 by a United States court of appeals held that the FCC lacked authority to impose it on the TV industry in the US. It required that all HDTVs obey a stream specification determining whether a stream can be recorded. This could block instances of fair use, such as time-shifting. It achieved more success elsewhere when it was adopted by the Digital Video Broadcasting Project (DVB), a consortium of about 250 broadcasters, manufacturers, network operators, software developers, and regulatory bodies from about 35 countries involved in attempting to develop new digital TV standards.
An updated variant of the broadcast flag has been developed in the Content Protection and Copy Management group under DVB (DVB-CPCM). Upon publication by DVB, the technical specification was submitted to European governments in March 2007. As with much DRM, the CPCM system is intended to control use of copyrighted material by the end-user, at the direction of the copyright holder. According to Ren Bucholz of the Electronic Frontier Foundation (EFF), which paid to be a member of the consortium, "You won't even know ahead of time whether and how you will be able to record and make use of particular programs or devices". The normative sections have now all been approved for publication by the DVB Steering Board, and will be published by ETSI as a formal European Standard as ETSI TS 102 825-X where X refers to the Part number of specification. Nobody has yet stepped forward to provide a Compliance and Robustness regime for the standard, so it is not presently possible to fully implement a system, as there is nowhere to obtain the necessary device certificates.
Implementations
Analog Protection System (Macrovision)
DCS Copy Protection
B-CAS
CableCARD
Broadcast flag
DVB-CPCM
Copy Control Information
ISDB#Copy-protection technology
FairPlay
Sony rootkit
Content Scramble System (CSS)
ARccOS protection
Advanced Access Content System (AACS)
Content Protection for Recordable Media (CPRM)
Digital Transmission Content Protection
High-bandwidth Digital Content Protection (HDCP)
Protected Media Path
Trusted Platform Module#Uses
Intel Management Engine#Design
Cinavia
HTML5 video Encrypted Media Extensions (HTML5 EME, often implemented with Widevine)
Denuvo
StarForce
SafeDisc
SecuROM
In addition, platforms such as Steam may include DRM mechanisms. Most of the mechanisms above are not DRM mechanisms per se but rather copy protection mechanisms; they are nevertheless commonly referred to as DRM.
Laws
The 1996 World Intellectual Property Organization Copyright Treaty (WCT) requires nations to enact laws against DRM circumvention, and has been implemented in most member states of the World Intellectual Property Organization.
The United States implementation is the Digital Millennium Copyright Act (DMCA), while in Europe the treaty has been implemented by the 2001 Information Society Directive, which requires member states of the European Union to implement legal protections for technological prevention measures. The lower house of the French parliament adopted such legislation as part of the controversial DADVSI law, but added that protected DRM techniques should be made interoperable, a move which caused widespread controversy in the United States. The Tribunal de grande instance de Paris concluded in 2006 that the complete blocking of any possibilities of making private copies was an impermissible behaviour under French copyright law.
China
In 1998 "Interim Regulations" were founded in China, referring to the DMCA. China also has Intellectual Property Rights, which to the World Trade Organization, was "not in compliance with the Berne Convention". The WTO panel "determined that China's copyright laws do not provide the same efficacy to non- Chinese nationals as they do to Chinese citizens, as required by the Berne Convention". and that "China's copyright laws do not provide enforcement procedures so as to permit effective action against any act of infringement of intellectual property rights".
European Union
On 22 May 2001, the European Union passed the Information Society Directive, an implementation of the 1996 WIPO Copyright Treaty, that addressed many of the same issues as the DMCA.
On 25 April 2007, the European Parliament supported the first EU directive aimed at harmonizing criminal law in the member states, adopting a first-reading report on harmonizing national measures for fighting copyright abuse. If the European Parliament and the Council approve the legislation, the submitted directive will oblige the member states to treat as a crime any violation of international copyright committed for commercial purposes. The text suggests numerous measures, from fines to imprisonment, depending on the gravity of the offense. The EP members supported the Commission motion, changing some of the texts. They excluded patent rights from the scope of the directive and decided that the sanctions should apply only to offenses with commercial purposes. Copying for personal, non-commercial purposes was also excluded from the scope of the directive.
In 2012, the Court of Justice of the European Union ruled in favor of reselling copyrighted games, prohibiting any preventative action that would prevent such transaction. The court said that "The first sale in the EU of a copy of a computer program by the copyright holder or with his consent exhausts the right of distribution of that copy in the EU. A rightholder who has marketed a copy in the territory of a Member State of the EU thus loses the right to rely on his monopoly of exploitation in order to oppose the resale of that copy."
In 2014, the Court of Justice of the European Union ruled that circumventing DRM on game devices may be legal under some circumstances, limiting the legal protection to only cover technological measures intended to prevent or eliminate unauthorised acts of reproduction, communication, public offer or distribution.
India
India is not a signatory to the WIPO Copyright Treaty or the WIPO Performances and Phonograms Treaty. However, as part of its 2012 amendment of copyright laws, it implemented digital rights management protection. Section 65A of the Copyright Act, 1957 imposed criminal sanctions on circumvention of "effective technological protection measures". Section 65B criminalized interference with digital rights management information. Any distribution of copies whose rights management information was modified was also criminalized by Section 65B. The terms used in the provisions were not specifically defined, with the concerned Parliamentary Standing Committee indicating that this was deliberate. The Standing Committee noted that similar terms in developed countries had been defined with considerable complexity, and that in light of this it was preferable to keep them open-ended.
A prison sentence is mandatory under both provisions, with a maximum term of two years, in addition to a fine, which is discretionary. While the statute does not directly include exceptions to copyright infringement, such as fair use, Section 65A allows measures "unless they are expressly prohibited", which may implicitly include such exceptions. Section 65B, however, lacks any exceptions. Further, Section 65B (digital rights management information) allows resort to other civil provisions, unlike Section 65A.
The WIPO Internet Treaties themselves do not mandate criminal sanctions, merely requiring "effective legal remedies". Thus, India's adoption of criminal sanctions ensures compliance with the highest standards of the WIPO internet treaties. Given the 2012 amendment, India's entry to the WIPO Internet Treaties appears facilitated, especially since ratification of the WIPO Internet Treaties is mandatory under agreements like the RCEP.
Israel
Israel has not ratified the WIPO Copyright Treaty. Israeli law does not currently expressly prohibit the circumvention of technological measures used to implement digital rights management. In June 2012, the Israeli Ministry of Justice proposed a bill to prohibit such activities, but the Knesset did not pass it. In September 2013, the Supreme Court ruled that the current copyright law could not be interpreted to prohibit the circumvention of digital rights management, though the Court left open the possibility that such activities could result in liability under the law of unjust enrichment.
United States
In May 1998, the Digital Millennium Copyright Act (DMCA) passed as an amendment to US copyright law, which criminalizes the production and dissemination of technology that lets users circumvent technical copy-restriction methods. (For a more detailed analysis of the statute, see WIPO Copyright and Performances and Phonograms Treaties Implementation Act.)
Reverse engineering of existing systems is expressly permitted under the Act under a specific safe-harbor condition, where circumvention is necessary to achieve interoperability with other software. See 17 U.S.C. Sec. 1201(f). Open-source software to decrypt content scrambled with the Content Scrambling System and other encryption techniques presents an intractable problem with the application of the Act. Much depends on the intent of the actor. If the decryption is done for the purpose of achieving interoperability of open source operating systems with proprietary operating systems, it would be protected by Section 1201(f) of the Act. Cf., Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001) at notes 5 and 16. However, dissemination of such software for the purpose of violating or encouraging others to violate copyrights has been held illegal. See Universal City Studios, Inc. v. Reimerdes, 111 F. Supp. 2d 346 (S.D.N.Y. 2000).
The DMCA has been largely ineffective in protecting DRM systems, as software allowing users to circumvent DRM remains widely available. However, those who wish to preserve the DRM systems have attempted to use the Act to restrict the distribution and development of such software, as in the case of DeCSS.
Although the Act contains an exception for research, the exception is subject to vague qualifiers that do little to reassure researchers. Cf., 17 U.S.C. Sec. 1201(g). The DMCA has affected cryptography, because many fear that cryptanalytic research may violate the DMCA. In 2001, the arrest of Russian programmer Dmitry Sklyarov for alleged infringement of the DMCA was a highly publicized example of the law's use to prevent or penalize development of anti-DRM measures. He was arrested in the US after a presentation at DEF CON, and spent several months in jail. The DMCA has also been cited as chilling to users with no criminal intent, such as students of cryptanalysis (including Professor Edward Felten and students at Princeton University); security consultants, such as the Netherlands-based Niels Ferguson, who declined to publish vulnerabilities he discovered in Intel's secure-computing scheme for fear of being arrested under the DMCA when he travels to the US; and blind or visually impaired users of screen readers or other assistive technologies.
International issues
In Europe, there have been several ongoing dialog activities that are characterized by their consensus-building intention:
January 2001 Workshop on Digital Rights Management of the World Wide Web Consortium.
2003 Participative preparation of the European Committee for Standardization/Information Society Standardization System (CEN/ISSS) DRM Report.
2005 DRM Workshops of Directorate-General for Information Society and Media (European Commission), and the work of the High Level Group on DRM.
2005 Gowers Review of Intellectual Property by the British Government from Andrew Gowers published in 2006 with recommendations regarding copyright terms, exceptions, orphaned works, and copyright enforcement.
2004 Consultation process of the European Commission, DG Internal Market, on the Communication COM(2004)261 by the European Commission on "Management of Copyright and Related Rights" (closed).
The AXMEDIS project, a European Commission Integrated Project of the FP6, has as its main goal automating content production, copy protection, and distribution, to reduce the related costs, and to support DRM at both B2B and B2C areas, harmonizing them.
The INDICARE project is an ongoing dialogue on consumer acceptability of DRM solutions in Europe. It is an open and neutral platform for exchange of facts and opinions, mainly based on articles by authors from science and practice.
Notable lawsuits
DVD Copy Control Association, Inc. v. Bunner
DVD Copy Control Association, Inc. v. Kaleidescape, Inc.
RealNetworks, Inc. v. DVD Copy Control Association, Inc.
Universal v. Reimerdes
Opposition
Many organizations, prominent individuals, and computer scientists are opposed to DRM. Two notable DRM critics are John Walker, as expressed for instance, in his article "The Digital Imprimatur: How Big brother and big media can put the Internet genie back in the bottle", and Richard Stallman in his article The Right to Read and in other public statements: "DRM is an example of a malicious feature – a feature designed to hurt the user of the software, and therefore, it's something for which there can never be toleration". Stallman also believes that using the word "rights" is misleading and suggests that the word "restrictions", as in "Digital Restrictions Management", be used instead. This terminology has since been adopted by many other writers and critics unconnected with Stallman.
Other prominent critics of DRM include Professor Ross Anderson of Cambridge University, who heads a British organization which opposes DRM and similar efforts in the UK and elsewhere, and Cory Doctorow, a writer and technology blogger. The EFF and similar organizations such as FreeCulture.org also hold positions which are characterized as opposed to DRM. The Foundation for a Free Information Infrastructure has criticized DRM's effect as a trade barrier from a free market perspective.
Bill Gates spoke about DRM at CES in 2006. According to him, DRM is not where it should be, and causes problems for legitimate consumers while trying to distinguish between legitimate and illegitimate users.
There have been numerous others who see problems with DRM at a more fundamental level. This is similar to some of the ideas in Michael H. Goldhaber's presentation about "The Attention Economy and the Net" at a 1997 conference on the "Economics of Digital Information". (Sample quote from the "Advice for the Transition" section of that presentation: "If you can't figure out how to afford it without charging, you may be doing something wrong.")
The Norwegian consumer rights organization "Forbrukerrådet" complained to Apple Inc. in 2007, about the company's use of DRM in, and in conjunction with, its iPod and iTunes products. Apple was accused of restricting users' access to their music and videos in an unlawful way, and of using EULAs which conflict with Norwegian consumer legislation. The complaint was supported by consumers' ombudsmen in Sweden and Denmark, and is currently being reviewed in the EU. Similarly, the United States Federal Trade Commission held hearings in March 2009, to review disclosure of DRM limitations to customers' use of media products.
Valve president Gabe Newell also stated "most DRM strategies are just dumb" because they only decrease the value of a game in the consumer's eyes. Newell suggests that the goal should instead be "[creating] greater value for customers through service value". Valve operates Steam, a service which serves as an online store for PC games, as well as a social networking service and a DRM platform.
At the 2012 Game Developers Conference, the CEO of CD Projekt Red, Marcin Iwinski, announced that the company will not use DRM in any of its future releases. Iwinski stated of DRM, "It's just over-complicating things. We release the game. It's cracked in two hours, it was no time for Witcher 2. What really surprised me is that the pirates didn't use the GOG version, which was not protected. They took the SecuROM retail version, cracked it and said 'we cracked it' – meanwhile there's a non-secure version with a simultaneous release. You'd think the GOG version would be the one floating around." Iwinski added after the presentation, "DRM does not protect your game. If there are examples that it does, then people maybe should consider it, but then there are complications with legit users."
The Association for Computing Machinery and the Institute of Electrical and Electronics Engineers have historically opposed DRM, even going so far as to name AACS as a technology "most likely to fail" in an issue of IEEE Spectrum.
Tools like FairUse4WM have been created to strip Windows Media of DRM restrictions. Websites such as library.nu (shut down by court order on 15 February 2012), BookFi, BookFinder, Library Genesis, and Sci-Hub have gone further to allow downloading e-books by violating copyright.
Public licenses
The GNU General Public License version 3, as released by the Free Software Foundation, has a provision that "strips" DRM of its legal value, so people can break the DRM on GPL software without breaking laws like the DMCA. Also, in May 2006, the FSF launched a "Defective by Design" campaign against DRM.
Creative Commons provides licensing options encouraging the expansion of and building upon creative work without the use of DRM. In addition, Creative Commons licenses have anti-DRM clauses, therefore the use of DRM by a licensee to restrict the freedoms granted by a Creative Commons license is a breach of the Baseline Rights asserted by the licenses.
DRM-free works
In reaction to opposition to DRM, many publishers and artists label their works as "DRM-free". Major companies that have done so include the following:
Apple Inc. sold DRM content on their iTunes Store when it launched in 2003, but made music DRM-free after April 2007 and has been labeling all music as "DRM-Free" since January 2009. The files still carry tags to identify the purchaser. Other works sold on iTunes such as apps, audiobooks, movies, and TV shows continue to be protected by DRM.
Since 2014, Comixology, which distributes digital comics, has allowed rights holders to provide the option of a DRM-free download of purchased comics. Publishers which allow this include Dynamite Entertainment, Image Comics, Thrillbent, Top Shelf Productions, and Zenescope Entertainment.
GOG.com (formerly Good Old Games), a digital distributor since 2008, specializes in the distribution of PC video games. While most other digital distribution services allow various forms of DRM (or have them embedded), gog.com has a strict non-DRM policy.
Tor Books, a major publisher of science fiction and fantasy books, first sold DRM-free e-books in July 2012. Smaller e-book publishers, such as Baen Books and O'Reilly Media, had already forgone DRM previously.
Vimeo on Demand is one of the publishers included in the Free Software Foundation's DRM-free guide.
Shortcomings
Reliability
Many DRM systems require authentication with an online server. Whenever the server goes down, or a region or country experiences an Internet outage, it effectively locks out people from registering or using the material. This is especially true for a product that requires a persistent online authentication, where, for example, a successful DDoS attack on the server would essentially make all copies of the material unusable.
Additionally, any system that requires contact with an authentication server is vulnerable to that server's becoming unavailable, as happened in 2007, when videos purchased from Major League Baseball (mlb.com) prior to 2006 became unplayable due to a change to the servers that validate the licenses.
Usability
Discs with DRM schemes are not standards-compliant compact discs (CDs) but are rather CD-ROM media. Therefore, they all lack the CD logotype found on discs which follow the standard (known as Red Book). These CDs cannot be played on all CD players or personal computers. Personal computers running Microsoft Windows sometimes even crash when attempting to play the CDs.
Performance
Certain DRM systems have been associated with performance drawbacks: some computer games implementing Denuvo Anti-Tamper have performed better after it was patched out. However, the impact on performance can be minimized depending on how the system is integrated. In March 2018, PC Gamer tested Final Fantasy XV for the performance effects of Denuvo, which was found to cause no negative gameplay impact despite a slight increase in loading time.
Fundamental bypass
Always technically breakable
DRM schemes, especially software-based ones, can never be wholly secure since the software must include all the information necessary to decrypt the content, such as the decryption keys. An attacker will be able to extract this information and directly decrypt and copy the content, bypassing the restrictions imposed by a DRM system. Even with the industrial-grade Advanced Access Content System (AACS) for HD DVD and Blu-ray Discs, a process key was published by hackers in December 2006, which enabled unrestricted access to AACS-protected content. After the first keys were revoked, further cracked keys were released.
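The point can be made concrete with a toy Python sketch: whatever key the player software holds in order to decrypt content for playback can equally be used, once extracted, to decrypt it for copying. The XOR "cipher" below is only a stand-in for a real algorithm such as AES.

# Toy illustration: a key embedded in the player decrypts for playback and for copying alike.
def xor_cipher(data, key):                   # stand-in cipher; real systems use AES and the like
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

EMBEDDED_KEY = b"\x13\x37"                   # shipped inside the player software
protected = xor_cipher(b"protected content", EMBEDDED_KEY)

def player_play(blob):
    return xor_cipher(blob, EMBEDDED_KEY)            # the legitimate playback path

def attacker_copy(blob, extracted_key):
    return xor_cipher(blob, extracted_key)           # identical operation once the key is extracted

print(player_play(protected) == attacker_copy(protected, EMBEDDED_KEY))   # True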
Some DRM schemes use encrypted media which requires purpose-built hardware to decode the content. A common real-world example can be found in commercial direct broadcast satellite television systems such as DirecTV and Malaysia's Astro. The company uses tamper-resistant smart cards to store decryption keys so that they are hidden from the user and the satellite receiver. This appears to ensure that only licensed users with the hardware can access the content. While this in principle can work, it is extremely difficult to build hardware that protects the secret key against a sufficiently determined adversary. Many such systems have failed in the field. Once the secret key is known, building a version of the hardware that performs no checks is often relatively straightforward. In addition, user verification provisions are frequently subject to attack, pirate decryption being among the most frequent.
Bruce Schneier argues that digital copy prevention is futile: "What the entertainment industry is trying to do is to use technology to contradict that natural law. They want a practical way to make copying hard enough to save their existing business. But they are doomed to fail." He has also described trying to make digital files uncopyable as being like "trying to make water not wet". The creators of StarForce also take this stance, stating that "The purpose of copy protection is not making the game uncrackable – it is impossible."
Analog recording
All forms of DRM for audio and visual material (excluding interactive materials, e.g., video games) are subject to the analog hole, namely that in order for a viewer to play the material, the digital signal must be turned into an analog signal containing light and/or sound for the viewer, and so available to be copied as no DRM is capable of controlling content in this form. In other words, a user could play a purchased audio file while using a separate program to record the sound back into the computer into a DRM-free file format.
All DRM to date can therefore be bypassed by recording this signal and digitally storing and distributing it in a non-DRM-limited form, by anyone who has the technical means of recording the analog stream. Furthermore, the analog hole cannot be overcome without the additional protection of externally imposed restrictions, such as legal regulations, because the vulnerability is inherent to all analog means of transmission. However, the conversion from digital to analog and back is likely to force a loss of quality, particularly when using lossy digital formats. HDCP is an attempt to plug the analog hole, although as of 2009, it was largely ineffective.
Asus released a soundcard which features a function called "Analog Loopback Transformation" to bypass the restrictions of DRM. This feature allows the user to record DRM-restricted audio via the soundcard's built-in analog I/O connection.
In order to prevent this exploit, there have been some discussions between copyright holders and manufacturers of electronics capable of playing such content about no longer including analog connectivity in their devices. The movement, dubbed the "Analog Sunset", has seen a steady decline in analog output options on most Blu-ray devices manufactured after 2010.
Consumer rights implication
Ownership issue after purchase
DRM opponents argue that the presence of DRM violates existing private property rights and restricts a range of heretofore normal and legal user activities. A DRM component would control a device a user owns (such as a digital audio player) by restricting how it may act with regard to certain content, overriding some of the user's wishes (for example, preventing the user from burning a copyrighted song to CD as part of a compilation or a review). Doctorow has described this possibility as "the right to make up your own copyright laws".
An example of this restriction to legal user activities may be seen in Microsoft's Windows Vista operating system in which content using a Protected Media Path is disabled or degraded depending on the DRM scheme's evaluation of whether the hardware and its use are 'secure'. All forms of DRM depend on the DRM-enabled device (e.g., computer, DVD player, TV) imposing restrictions that cannot be disabled or modified by the user. Key issues around DRM such as the right to make personal copies, provisions for persons to lend copies to friends, provisions for service discontinuance, hardware agnosticism, software and operating system agnosticism, contracts for public libraries, and customers' protection against one-side amendments of the contract by the publisher have not been fully addressed. It has also been pointed out that it is entirely unclear whether owners of content with DRM are legally permitted to pass on their property as inheritance to another person.
In one instance of DRM that caused a rift with consumers, Amazon.com in July 2009, remotely deleted purchased copies of George Orwell's Animal Farm (1945) and Nineteen Eighty-Four (1949) from customers' Amazon Kindles after providing them a refund for the purchased products. Commentators have described these actions as Orwellian and have compared Amazon to Big Brother from Nineteen Eighty-Four. After Amazon CEO Jeff Bezos issued a public apology, the Free Software Foundation wrote that this was just one more example of the excessive power Amazon has to remotely censor what people read through its software, and called upon Amazon to free its e-book reader and drop DRM. Amazon then revealed the reason behind its deletion: the e-books in question were unauthorized reproductions of Orwell's works, which were not within the public domain and to which the company that published and sold them on Amazon's service had no rights.
Compulsory bundled software
In 2005, Sony BMG introduced new DRM technology which installed DRM software on users' computers without clearly notifying the user or requiring confirmation. Among other things, the installed software included a rootkit, which created a severe security vulnerability others could exploit. When the nature of the DRM involved was made public much later, Sony BMG initially minimized the significance of the vulnerabilities its software had created, but was eventually compelled to recall millions of CDs, and released several attempts to patch the surreptitiously included software to at least remove the rootkit. Several class action lawsuits were filed, which were ultimately settled by agreements to provide affected consumers with a cash payout or album downloads free of DRM.
Obsolescence
When standards and formats change, it may be difficult to transfer DRM-restricted content to new media. For instance, Microsoft's media player Zune did not support content using Microsoft's own PlaysForSure DRM scheme, which the company had previously been selling.
Furthermore, when a company undergoes business changes or even bankruptcy, its previous services may become unavailable. Examples include MSN Music, Yahoo! Music Store, Adobe Content Server 3 for Adobe PDF, Acetrax Video on Demand, etc.
Selective enforcement
DRM laws are widely flouted: according to the Australia Official Music Chart Survey, copyright infringements from all causes are practised by millions of people. According to the EFF, "in an effort to attract customers, these music services try to obscure the restrictions they impose on you with clever marketing."
Economic implication
Lost benefits from massive market share
Jeff Raikes, ex-president of the Microsoft Business Division, stated: "If they're going to pirate somebody, we want it to be us rather than somebody else". An analogous argument was made in an early paper by Kathleen Conner and Richard Rummelt. A subsequent study of digital rights management for e-books by Gal Oestreicher-Singer and Arun Sundararajan showed that relaxing some forms of DRM can be beneficial to digital rights holders because the losses from piracy are outweighed by the increases in value to legal buyers.
Also, free distribution, even if unauthorized, can be beneficial to small or new content providers by spreading and popularizing content. With a larger consumer base by sharing and word of mouth, the number of paying customers also increases, resulting in more profits. Several musicians have grown to popularity by posting their music videos on sites like YouTube where the content is free to listen to. This method of putting the product out in the world free of DRM not only generates a greater following but also fuels greater revenue through other merchandise (hats, T-shirts), concert tickets, and of course, more sales of the content to paying consumers.
Pushing away legitimate customers
While the main intent of DRM is to prevent unauthorized copies of a product, there are mathematical models that suggest DRM schemes can fail to do their job on multiple levels. The biggest failure is that the burden DRM poses on a legitimate customer reduces the customer's willingness to pay for the product. An ideal DRM would be one which imposes zero restrictions on legal buyers but imposes restrictions on copyright infringers.
In January 2007, EMI stopped publishing audio CDs with DRM, stating that "the costs of DRM do not measure up to the results." In March, Musicload.de, one of Europe's largest internet music retailers, announced its position strongly against DRM. In an open letter, Musicload stated that three out of every four calls to its customer support phone service were the result of consumer frustration with DRM.
The mathematical models are strictly applied to the music industry (music CDs, downloadable music). These models could be extended to other industries, such as the gaming industry, which show similarities to the music industry model. There are real instances in which DRM restrains consumers in the gaming industry: some games with DRM require an Internet connection in order to be played. Good Old Games' head of public relations and marketing, Trevor Longino, in agreement with this, believes that using DRM is less effective than improving a game's value in reducing video game infringement. However, TorrentFreak published a "Top 10 pirated games of 2008" list which shows that intrusive DRM is not the main reason why some games are copied more heavily than others; popular games such as BioShock, Crysis Warhead, and Mass Effect, which use intrusive DRM, are absent from the list.
Anti-competitive practice
The Electronic Frontier Foundation (EFF) and the Free Software Foundation (FSF) consider the use of DRM systems to be an anti-competitive practice.
Alternatives
Several business models have been proposed that offer an alternative to the use of DRM by content providers and rights holders.
"Easy and cheap"
The first business model that dissuades illegal file sharing is to make downloading digital media easy and cheap. The use of noncommercial sites makes downloading digital media complex. For example, misspelling an artist's name in a search query will often fail to return a result, and some sites limit internet traffic, which can make downloading media a long and frustrating process. Furthermore, illegal file sharing websites are often host to viruses and malware which attach themselves to the files (see torrent poisoning). If digital media (for example, songs) are all provided on accessible, legitimate sites, and are reasonably priced, consumers will purchase media legally to overcome these frustrations.
Comedian Louis C.K. made headlines in 2011, with the release of his concert film Live at the Beacon Theater as an inexpensive (US$5), DRM-free download. The only attempt to deter unlicensed copies was a letter emphasizing the lack of corporate involvement and direct relationship between artist and viewer. The film was a commercial success, turning a profit within 12 hours of its release. Some, including the artist himself, have suggested that file sharing rates were lower than normal as a result, making the release an important case study for the digital marketplace.
Webcomic Diesel Sweeties released a DRM-free PDF e-book on author R Stevens's 35th birthday. He followed this with a DRM-free iBook specifically for the iPad, using Apple's new software, which generated more than 10,000 downloads in three days. That led Stevens to launch a Kickstarter project, "ebook stravaganza 3000", to fund the conversion of 3,000 comics, written over 12 years, into a single "humongous" e-book to be released both for free and through the iBookstore; it launched on 8 February 2012 with the goal of raising $3,000 in 30 days. The "payment optional" DRM-free model in this case was adopted on Stevens's view that "there is a class of webcomics reader who would prefer to read in large chunks and, even better, would be willing to spend a little money on it."
Crowdfunding or pre-order model
In February 2012, Double Fine asked for crowdfunding for an upcoming video game, Double Fine Adventure, on Kickstarter and offered the game DRM-free for backers. This project exceeded its original goal of $400,000 in 45 days, raising in excess of $2 million. In this case DRM freedom was offered to backers as an incentive for supporting the project before release, with the consumer and community support and media attention from the highly successful Kickstarter drive counterbalancing the potential losses. Also, crowdfunding with the product itself as the benefit for supporters can be seen as a pre-order or subscription business model, in which one motivation for DRM, the uncertainty of whether a product will have enough paying customers to outweigh the development costs, is eliminated. After the success of Double Fine Adventure, many games were crowd-funded and many of them offered a DRM-free version for backers.
Digital content as promotion for traditional products
Many artists use the Internet to give away music to create awareness of, and interest in, an upcoming album. The artists release a new song on the internet for free download, hoping that listeners will buy the new album as a result. A common practice today is releasing one or two songs online for consumers to sample. In 2007, Radiohead released the album In Rainbows, for which fans could pay any amount they wanted, or download it for free.
Artistic Freedom Voucher
The Artistic Freedom Voucher (AFV), introduced by Dean Baker, is a way for consumers to support "creative and artistic work". In this system, each consumer would have a refundable tax credit of $100 to give to any artist of creative work. To restrict fraud, artists must register with the government. The voucher prohibits any artist that receives the benefit from copyrighting their material for a certain length of time. Consumers can easily obtain music for a certain amount of time, and the consumer decides which artists receive the $100. The money can be given either to one artist or to many; the distribution is up to the consumer.
See also
Anti-tamper software
Closed platform
Digital asset management
Hardware restrictions
License manager
ODRL
Software metering
Software protection dongle
Secure Digital Music Initiative
Trusted Computing
References
Further reading
Lawrence Lessig's Free Culture, published by Basic Books in 2004, is available for free download in PDF format. The book is a legal and social history of copyright. Lessig is well known, in part, for arguing landmark cases on copyright law. A Professor of Law at Stanford University, Lessig writes for an educated lay audience, including non-lawyers. He is, for the most part, an opponent of DRM technologies.
Rosenblatt, B. et al., Digital Rights Management: Business and Technology, published by M&T Books (John Wiley & Sons) in 2001. An overview of DRM technology, business implications for content publishers, and relationship to U.S. copyright law.
Consumer's Guide to DRM, published in 10 languages (Czech, German, Greek, English, Spanish, French, Hungarian, Italian, Polish, Swedish), produced by the INDICARE research and dialogue project
Eberhard Becker, Willms Buhse, Dirk Günnewig, Niels Rump: Digital Rights Management – Technological, Economic, Legal and Political Aspects. An 800-page compendium from 60 different authors on DRM.
Arun Sundararajan's paper uses the digital rights conjecture that "digital rights increases the incidence of digital piracy, and that managing digital rights therefore involves restricting the rights of usage that contribute to customer value" to show that creative pricing can be an effective substitute for excessively stringent DRM.
Fetscherin, M., Implications of Digital Rights Management on the Demand for Digital Content, provides an excellent view of DRM from a consumer's perspective.
The Pig and the Box, a book with colorful illustrations (also available as a coloring book), by 'MCM'. It describes DRM in terms suited to kids, written in reaction to a Canadian entertainment industry copyright education initiative aimed at children.
Present State and Emerging Scenarios of Digital Rights Management Systems – A paper by Marc Fetscherin which provides an overview of the various components of DRM, pro and cons and future outlook of how, where, when such systems might be used.
DRM is Like Paying for Ice – Richard Menta article on MP3 Newswire discusses how DRM is implemented in ways to control consumers, but is undermining perceived product value in the process.
PhD thesis by Roberto García that tries to address DRM issues using Semantic Web technologies and methodologies.
Patricia Akester, "Technological Accommodation of Conflicts between Freedom of Expression and DRM: The First Empirical Assessment" available at Technological Accommodation of Conflicts between Freedom of Expression and DRM: The First Empirical Assessment (unveiling, through empirical lines of enquiry, (1) whether certain acts which are permitted by law are being adversely affected by the use of DRM and (2) whether technology can accommodate conflicts between freedom of expression and DRM).
External links
BBC News Technology Q&A: What is DRM?
Copyright vs Community in the Age of Computer Networks by Richard Stallman
from Microsoft
Microsoft Research DRM talk, by Cory Doctorow
iTunes, DRM and competition law by Reckon LLP
from CEN/ISSS (European Committee for Standardization / Information Society Standardization System). Contains a range of possible definitions for DRM from various stakeholders. 30 September 2003
PC Game Piracy Examined Article investigating the effects of DRM and piracy on the video game industry
DRM.info Information about DRM by Chaos Computer Club, Defective by design, Electronic Frontier Foundation, Free Software Foundation Europe, and other organisations.
Copyright law
Cryptography law
Television terminology |
53757580 | https://en.wikipedia.org/wiki/Installation%2001 | Installation 01 | Installation 01 is an upcoming fan-made first-person shooter video game based on the Halo series and developed by Soon Studios (previously known as The Installation 01 Team). Installation 01 is being developed for Microsoft Windows, macOS, and Linux operating systems in the Unity game engine.
Development
Installation 01 is a fan-made, multiplayer-only project designed in Unity for Microsoft Windows, macOS, and Linux by Soon Studios. The intent is to replicate the multiplayer gameplay of Halo: Combat Evolved, Halo 2, Halo 3, Halo 4, and Halo 5 as a tribute to the series of games—with a specific target of Halo 3—and to do so without re-use of Microsoft assets. The game is named after the Halo ring created by the Forerunner race in the Halo universe.
The studio planned to support some features from later Halo games, but only in custom game modes. Game artist Seth H. described a tension in the art and game design of Installation 01 caused by the Halo series' individual game designs. They plan to provide map-making tools.
An extremely early version of the game was displayed in August 2014. In November 2015, the game was displayed with functional multiplayer in "very small counts" of players; Kotaku noted that the game had not progressed far since August 2014. The team sought to have a playable version of the video game in Q4 2016. The studio was composed of over 30 people at this time.
A test for the multiplayer mode was scheduled for November 2016, with plans for a later single-player mode. At the time, the studio had made a number of familiar game elements, as well as both refreshed and remade versions of certain game maps. Animation polish and additional gameplay elements remained. Kotaku said the game was progressing well in March 2017. In May, a cinematic trailer was released, which featured character and weapons models based on Halo 3 designs. Animator Matthew Lake animated a trailer for both his final academic assignment and to generate excitement for the game. The game's release date was still unknown.
Legality
Soon Studios approached Microsoft around August 2016 regarding Microsoft's intellectual property, but did not receive a reply at the time. GameSpot also inquired, without receiving a response. In June 2017, Soon Studios announced their ongoing communication with 343 Industries and Microsoft. The studio has been transparent with 343 Industries regarding plans for the game. 343 Industries confirmed that the developers are "not under imminent legal threat". Installation 01's development is covered under Microsoft's Game Content Usage Rules, as long as the game remains non-commercial. The studio elaborates that it will never accept donations, or sell Installation 01 or Halo-related merchandise, so as to keep a respectful distance between the studio and Microsoft's intellectual property. It also notes that these rules and assurances from Microsoft are specific to Installation 01 as a project, and do not apply elsewhere. The studio hopes that the project can continue to be a positive driving force within the Halo community.
References
External links
Upcoming video games
Alien invasions in video games
First-person shooter multiplayer online games
Indie video games
Linux games
MacOS games
Science fiction video games
Video games about extraterrestrial life
Windows games |
1177467 | https://en.wikipedia.org/wiki/Telehealth | Telehealth | Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies. It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions. Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. When rural settings, lack of transport, a lack of mobility, conditions due to outbreaks, epidemics or pandemics, decreased funding, or a lack of staff restrict access to care, telehealth may bridge the gap
as well as provide distance-learning; meetings, supervision, and presentations between practitioners; online information and health data management and healthcare system integration. Telehealth could include two clinicians discussing a case over video conference; a robotic surgery occurring through remote access; physical therapy done via digital monitoring instruments, live feed and application combinations; tests being forwarded between facilities for interpretation by a higher specialist; home monitoring through continuous sending of patient health data; client to practitioner online conference; or even videophone interpretation during a consult.
Telehealth versus telemedicine
Telehealth is sometimes discussed interchangeably with telemedicine, the latter being more common than the former. The Health Resources and Services Administration distinguishes telehealth from telemedicine in its scope, defining telemedicine only as describing remote clinical services, such as diagnosis and monitoring, while telehealth includes preventative, promotive, and curative care delivery. This includes the above-mentioned non-clinical applications, like administration and provider education.
The United States Department of Health and Human Services states that the term telehealth includes "non-clinical services, such as provider training, administrative meetings, and continuing medical education", and that the term telemedicine means "remote clinical services".
The World Health Organization uses telemedicine to describe all aspects of health care including preventive care. The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.
eHealth is another related term, used particularly in the U.K. and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.
Methods and modalities
Telehealth requires good Internet access by participants, usually in the form of a strong, reliable broadband connection, and broadband mobile communication technology of at least the fourth generation (4G) or long-term evolution (LTE) standard to overcome issues with video stability and bandwidth restrictions. As broadband infrastructure has improved, telehealth usage has become more widely feasible.
Healthcare providers often begin telehealth with a needs assessment which identifies hardships that can be improved by telehealth, such as travel time, costs, or time off work. Collaborators, such as technology companies, can ease the transition.
Delivery can come within four distinct domains: live video (synchronous), store-and-forward (asynchronous), remote patient monitoring, and mobile health.
Store and forward
Store-and-forward telemedicine involves acquiring medical data (like medical images, biosignals etc.) and then transmitting this data to a doctor or medical specialist at a convenient time for assessment offline. It does not require the presence of both parties at the same time. Dermatology (cf: teledermatology), radiology, and pathology are common specialties that are conducive to asynchronous telemedicine. A properly structured medical record preferably in electronic form should be a component of this transfer. The 'store-and-forward' process requires the clinician to rely on a history report and audio/video information in lieu of a physical examination.
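As a rough illustration of this asynchronous workflow, the following sketch bundles an acquired image with its structured history report and drops it into a queue directory for a specialist to pick up and review later. It is a minimal sketch only: the directory layout, field names, and use of JSON are illustrative assumptions, and a real deployment would use an authenticated, encrypted transport and a standard record format (such as DICOM or HL7 FHIR) rather than ad-hoc files.

```python
import json
import shutil
import uuid
from datetime import datetime, timezone
from pathlib import Path

QUEUE_DIR = Path("outbox")  # illustrative location of the asynchronous review queue


def submit_case(image_path: str, history: dict, recipient: str) -> Path:
    """Bundle an acquired image with its history report and queue it
    for later (offline) review by a specialist."""
    case_id = uuid.uuid4().hex
    case_dir = QUEUE_DIR / case_id
    case_dir.mkdir(parents=True, exist_ok=True)

    # Copy the acquired image into the case bundle.
    shutil.copy(image_path, case_dir / Path(image_path).name)

    # Store the structured record alongside the image.
    record = {
        "case_id": case_id,
        "recipient": recipient,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        "history": history,  # e.g. symptoms, duration, current medications
        "image_file": Path(image_path).name,
    }
    (case_dir / "record.json").write_text(json.dumps(record, indent=2))
    return case_dir


def pending_cases() -> list:
    """List queued cases awaiting review, in submission order."""
    if not QUEUE_DIR.exists():
        return []
    return sorted(p for p in QUEUE_DIR.iterdir() if p.is_dir())
```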
Remote monitoring
Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma. These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective. Examples include home-based nocturnal dialysis and improved joint management.
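As a small, purely illustrative sketch of how a home device might report readings to a clinician's monitoring service, the Python below posts a blood-glucose value and flags it if it falls outside a configured range. The endpoint URL, field names, and thresholds are assumptions made for illustration, not part of any real monitoring product or clinical guidance.

```python
import json
import time
import urllib.request

# Illustrative endpoint and alert range; not a real service or clinical advice.
UPLOAD_URL = "https://example.org/api/remote-monitoring/readings"
GLUCOSE_RANGE_MG_DL = (70, 180)


def upload_reading(patient_id: str, glucose_mg_dl: float) -> bool:
    """Send one home glucose reading to the monitoring service and
    return True if it falls outside the configured alert range."""
    payload = {
        "patient_id": patient_id,
        "metric": "blood_glucose_mg_dl",
        "value": glucose_mg_dl,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    request = urllib.request.Request(
        UPLOAD_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # the illustrative server simply acknowledges receipt
    low, high = GLUCOSE_RANGE_MG_DL
    return not (low <= glucose_mg_dl <= high)
```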
Real-time interactive
Electronic consultations are possible through interactive telemedicine services which provide real-time interactions between patient and provider. Videoconferencing has been used in a wide range of clinical disciplines and settings for various purposes including management, diagnosis, counseling and monitoring of patients.
Videotelephony
Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real-time.
At the dawn of the technology, videotelephony also included image phones which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow scan TV systems.
Currently, videotelephony is particularly useful to deaf and speech-impaired people, who can use it with sign language and with a video relay service, as well as to those with mobility issues or those who are located in distant places and are in need of telemedical or tele-educational services.
Categories
Emergency care
Common daily emergency telemedicine is performed by SAMU Regulator Physicians in France, Spain, Chile and Brazil. Aircraft and maritime emergencies are also handled by SAMU centres in Paris, Lisbon and Toulouse.
A recent study identified three major barriers to adoption of telemedicine in emergency and critical care units. They include:
regulatory challenges related to the difficulty and cost of obtaining licensure across multiple states, malpractice protection and privileges at multiple facilities;
lack of acceptance and reimbursement by government payers and some commercial insurance carriers, creating a major financial barrier which places the investment burden squarely upon the hospital or healthcare system;
cultural barriers arising from the lack of desire, or unwillingness, of some physicians to adapt clinical paradigms for telemedicine applications.
Emergency Telehealth is also gaining acceptance in the United States. There are several modalities currently being practiced that include but are not limited to TeleTriage, TeleMSE and ePPE.
Medication-Assisted Treatment Through Telemedicine (Tele-MAT)
Medication-assisted treatment (MAT) is the treatment of opioid use disorder (OUD) with medications, often in combination with behavioral therapy. As a response to the COVID-19 pandemic, the Drug Enforcement Administration has permitted the use of telemedicine to start or maintain people with OUD on buprenorphine (trade name Suboxone) without the need for an initial in-person examination. On March 31, 2020, QuickMD became the first national Tele-MAT service in the United States to provide medication-assisted treatment with Suboxone online, without the need for an in-person visit, with others announcing plans to follow.
Telenutrition
Telenutrition refers to the use of video conferencing or telephony to provide online consultation by a nutritionist or dietician. Patients or clients upload their vital statistics, diet logs, food pictures, and so on to a telenutrition portal; a nutritionist or dietician then uses these to analyze their current health condition, set goals for them, and monitor their progress through regular follow-up consultations.
Telenutrition portals can help clients or patients seek remote consultation for themselves and their families from nutritionists or dieticians available across the globe. This can be especially helpful for elderly or bedridden patients, who can consult their dietician from the comfort of their homes.
Telenutrition was shown to be feasible, and the majority of patients trusted nutritional televisits offered in place of follow-up visits that were scheduled but could not be provided during the lockdowns of the COVID-19 pandemic.
Telenursing
Telenursing refers to the use of telecommunications and information technology to provide nursing services in health care whenever a large physical distance exists between patient and nurse, or between any number of nurses. As a field it is part of telehealth, and it has many points of contact with other medical and non-medical applications, such as telediagnosis, teleconsultation and telemonitoring.
Telenursing is achieving significant growth rates in many countries due to several factors: the preoccupation with reducing the costs of health care, an increase in the number of aging and chronically ill people, and the increase in coverage of health care to distant, rural, small or sparsely populated regions. Among its benefits, telenursing may help solve increasing shortages of nurses, reduce distances and save travel time, and keep patients out of hospital. A greater degree of job satisfaction has also been registered among telenurses.
In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers. The application, named Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture etc.) or call a lactation consultant via a secure Google Hangout, who can view the issue through the mother's Google Glass camera. The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.
Telepharmacy
Telepharmacy is the delivery of pharmaceutical care via telecommunications to patients in locations where they may not have direct contact with a pharmacist. It is an instance of the wider phenomenon of telemedicine, as implemented in the field of pharmacy. Telepharmacy services include drug therapy monitoring, patient counseling, prior authorization and refill authorization for prescription drugs, and monitoring of formulary compliance with the aid of teleconferencing or videoconferencing. Remote dispensing of medications by automated packaging and labeling systems can also be thought of as an instance of telepharmacy. Telepharmacy services can be delivered at retail pharmacy sites or through hospitals, nursing homes, or other medical care facilities.
The term can also refer to the use of videoconferencing in pharmacy for other purposes, such as providing education, training, and management services to pharmacists and pharmacy staff remotely.
Teledentistry
Teledentistry is the use of information technology and telecommunications for dental care, consultation, education, and public awareness in the same manner as telehealth and telemedicine.
Teleaudiology
Tele-audiology is the utilization of telehealth to provide audiological services and may include the full scope of audiological practice. This term was first used by Dr Gregg Givens in 1999 in reference to a system being developed at East Carolina University in North Carolina, USA.
Teleneurology
Teleneurology describes the use of mobile technology to provide neurological care remotely, including care for stroke, movement disorders such as Parkinson's disease, and seizure disorders (e.g., epilepsy). Teleneurology offers the opportunity to improve health care access for billions of people around the globe, from those living in urban locations to those in remote, rural locations. Evidence shows that individuals with Parkinson's disease prefer a personal connection with a remote specialist to their local clinician. Such home care is convenient but requires access to, and familiarity with, the internet. A 2017 randomized controlled trial of "virtual house calls" (video visits) with individuals diagnosed with Parkinson's disease found a patient preference for the remote specialist over their local clinician after one year. Teleneurology for patients with Parkinson's disease has been found to be cheaper than in-person visits by reducing transportation and travel time. A recent systematic review by Ray Dorsey et al. describes both the limitations and the potential benefits of teleneurology for improving care for patients with chronic neurological conditions, especially in low-income countries. In the US, white, well-educated and technologically savvy people are the biggest consumers of telehealth services for Parkinson's disease, as compared to ethnic minorities.
Teleneuropsychology
Teleneuropsychology (Cullum et al., 2014) is the use of telehealth/videoconference technology for the remote administration of neuropsychological tests. Neuropsychological tests are used to evaluate the cognitive status of individuals with known or suspected brain disorders and provide a profile of cognitive strengths and weaknesses. Through a series of studies, there is growing support in the literature showing that remote videoconference-based administration of many standard neuropsychological tests results in test findings that are similar to traditional in-person evaluations, thereby establishing the basis for the reliability and validity of teleneuropsychological assessment.
Telerehabilitation
Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunication networks and the Internet. Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment), and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance.
Most telerehabilitation is highly visual. As of 2014, the most commonly used mediums are webcams, videoconferencing, phone lines, videophones and webpages containing rich web applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces or artificial limbs; and in speech-language pathology. Rich web applications for neuropsychological rehabilitation (aka cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has expanded as a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. Currently, telerehabilitation in the practice of occupational therapy and physical therapy is limited, perhaps because these two disciplines are more "hands on".
Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice, in the future.
In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR) supports research and the development of telerehabilitation. NIDRR's grantees include the "Rehabilitation Engineering and Research Center" (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington DC. Other federal funders of research are the Veterans Health Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense. Outside the United States, excellent research is conducted in Australia and Europe.
Only a few health insurers in the United States, and about half of Medicaid programs, reimburse for telerehabilitation services. If the research shows that teleassessments and teletherapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services.
In India, the Indian Association of Chartered Physiotherapists (IACP) provides telerehabilitation facilities. With the support and collaboration of local clinics, private practitioners and its members, IACP runs the facility, named Telemedicine. IACP maintains an internet-based list of its members on its website, through which patients can make online appointments.
Teletrauma care
Telemedicine can be utilized to improve the efficiency and effectiveness of the delivery of care in a trauma environment. Examples include:
Telemedicine for trauma triage: using telemedicine, trauma specialists can interact with personnel on the scene of a mass casualty or disaster situation, via the internet using mobile devices, to determine the severity of injuries. They can provide clinical assessments and determine whether those injured must be evacuated for necessary care. Remote trauma specialists can provide the same quality of clinical assessment and plan of care as a trauma specialist located physically with the patient.
Telemedicine for intensive care unit (ICU) rounds: Telemedicine is also being used in some trauma ICUs to reduce the spread of infections. Rounds are usually conducted at hospitals across the country by a team of approximately ten or more people to include attending physicians, fellows, residents and other clinicians. This group usually moves from bed to bed in a unit discussing each patient. This aids in the transition of care for patients from the night shift to the morning shift, but also serves as an educational experience for new residents to the team. A new approach features the team conducting rounds from a conference room using a video-conferencing system. The trauma attending, residents, fellows, nurses, nurse practitioners, and pharmacists are able to watch a live video stream from the patient's bedside. They can see the vital signs on the monitor, view the settings on the respiratory ventilator, and/or view the patient's wounds. Video-conferencing allows the remote viewers two-way communication with clinicians at the bedside.
Telemedicine for trauma education: some trauma centers are delivering trauma education lectures to hospitals and health care providers worldwide using video conferencing technology. Each lecture provides fundamental principles, firsthand knowledge and evidenced-based methods for critical analysis of established clinical practice standards, and comparisons to newer advanced alternatives. The various sites collaborate and share their perspective based on location, available staff, and available resources.
Telemedicine in the trauma operating room: trauma surgeons are able to observe and consult on cases from a remote location using video conferencing. This capability allows the attending to view the residents in real time. The remote surgeon has the capability to control the camera (pan, tilt and zoom) to get the best angle of the procedure while at the same time providing expertise in order to provide the best possible care to the patient.
Telecardiology
ECGs, or electrocardiograms, can be transmitted over telephone and wireless links. Willem Einthoven, the inventor of the ECG, actually did tests with transmission of ECGs via telephone lines, because the hospital did not allow him to move patients outside the hospital to his laboratory for testing of his new device. In 1906 Einthoven came up with a way to transmit the data from the hospital directly to his laboratory.
Transmission of ECGs
One of the oldest known telecardiology systems for teletransmission of ECGs was established in Gwalior, India, in 1975 at GR Medical College by Ajai Shanker, S. Makhija and P.K. Mantri, using an indigenous technique for the first time in India.
This system enabled wireless transmission of the ECG from a moving ICU van or the patient's home to the central station in the ICU of the department of medicine. Wireless transmission used frequency modulation, which eliminated noise. Transmission was also done through telephone lines: the ECG output was connected to the telephone input using a modulator which converted the ECG into high-frequency sound, and at the other end a demodulator reconverted the sound into an ECG with good gain accuracy. The ECG was converted to sound waves with a frequency varying from 500 Hz to 2500 Hz, with 1500 Hz at baseline.
This system was also used to monitor patients with pacemakers in remote areas. The central control unit at the ICU was able to correctly interpret arrhythmias. This technique helped medical aid reach remote areas.
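For illustration only, the following sketch mimics the kind of modulation scheme described above: it maps a normalized ECG trace onto an audio tone whose instantaneous frequency sits at 1,500 Hz at baseline and swings between roughly 500 Hz and 2,500 Hz, and it includes a crude demodulator. The sampling rates, normalization, and use of NumPy/SciPy are assumptions made for the sketch, not details of the Gwalior system.

```python
import numpy as np
from scipy.signal import hilbert


def ecg_to_fm_audio(ecg, fs=500, audio_fs=8000, f_center=1500.0, f_dev=1000.0):
    """Map an ECG trace to an audio tone: 1500 Hz at baseline, swinging
    between roughly 500 Hz and 2500 Hz with the signal.

    ecg      -- ECG samples normalized to the range [-1, 1] (assumed)
    fs       -- ECG sampling rate in Hz (illustrative value)
    audio_fs -- output audio sampling rate in Hz
    """
    # Resample the ECG onto the audio time base by linear interpolation.
    t_ecg = np.arange(len(ecg)) / fs
    t_audio = np.arange(0, t_ecg[-1], 1.0 / audio_fs)
    ecg_up = np.interp(t_audio, t_ecg, ecg)

    # Instantaneous frequency: 1500 Hz baseline +/- 1000 Hz deviation.
    inst_freq = f_center + f_dev * ecg_up

    # Integrate frequency to phase and synthesize the tone.
    phase = 2 * np.pi * np.cumsum(inst_freq) / audio_fs
    return np.sin(phase)


def fm_audio_to_ecg(tone, audio_fs=8000, f_center=1500.0, f_dev=1000.0):
    """Crude demodulator: recover the ECG from the tone's instantaneous
    frequency via the analytic signal (illustrative, not clinical-grade)."""
    phase = np.unwrap(np.angle(hilbert(tone)))
    inst_freq = np.diff(phase) * audio_fs / (2 * np.pi)
    return (inst_freq - f_center) / f_dev
```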
In addition, electronic stethoscopes can be used as recording devices, which is helpful for purposes of telecardiology. There are many examples of successful telecardiology services worldwide.
In Pakistan, three pilot projects in telemedicine were initiated by the Ministry of IT & Telecom, Government of Pakistan (MoIT), through the Electronic Government Directorate in collaboration with Oratier Technologies (a pioneer company within Pakistan dealing with healthcare and HMIS) and PakDataCom (a bandwidth provider). Three hub stations were linked via the Pak Sat-I communications satellite, and four districts were linked with another hub. A 312 kbit/s link was also established with remote sites, and 1 Mbit/s of bandwidth was provided at each hub. Three hubs were established: the Mayo Hospital (the largest hospital in Asia), JPMC Karachi and Holy Family Rawalpindi. Twelve remote sites were connected, and an average of 1,500 patients were treated per month per hub. The project was still running smoothly after two years.
Wireless ambulatory ECG technology, moving beyond previous ambulatory ECG technology such as the Holter monitor, now includes smartphones and Apple Watches which can perform at-home cardiac monitoring and send the data to a physician via the internet.
Telepsychiatry
Telepsychiatry, another aspect of telemedicine, also utilizes videoconferencing for patients residing in underserved areas to access psychiatric services. It offers a wide range of services to patients and providers, such as consultation between psychiatrists, educational clinical programs, diagnosis and assessment, medication therapy management, and routine follow-up meetings. Most telepsychiatry is undertaken in real time (synchronously), although in recent years research at UC Davis has developed and validated the process of asynchronous telepsychiatry. Recent reviews of the literature by Hilty et al. in 2013 and by Yellowlees et al. in 2015 confirmed that telepsychiatry is as effective as in-person psychiatric consultations for diagnostic assessment, is at least as good for the treatment of disorders such as depression and post-traumatic stress disorder, and may be better than in-person treatment in some groups of patients, notably children, veterans and individuals with agoraphobia.
As of 2011, the following are some of the model programs and projects which are deploying telepsychiatry in rural areas in the United States:
University of Colorado Health Sciences Center (UCHSC) supports two programs for American Indian and Alaskan Native populations
a. The Center for Native American Telehealth and Tele-education (CNATT) and
b. Telemental Health Treatment for American Indian Veterans with Post-traumatic Stress Disorder (PTSD)
Military Psychiatry, Walter Reed Army Medical Center.
In 2009, the South Carolina Department of Mental Health established a partnership with the University of South Carolina School of Medicine and the South Carolina Hospital Association to form a statewide telepsychiatry program that provides access to psychiatrists 16 hours a day, 7 days a week, to treat patients with mental health issues who present at rural emergency departments in the network.
Between 2007 and 2012, the University of Virginia Health System hosted a videoconferencing project that allowed child psychiatry fellows to conduct approximately 12,000 sessions with children and adolescents living in rural parts of the State.
There are a growing number of HIPAA-compliant technologies for performing telepsychiatry. There is an independent comparison site of current technologies.
Links for several sites related to telemedicine, telepsychiatry policy, guidelines, and networking are available at the website for the American Psychiatric Association.
There has also been a recent trend towards Video CBT sites with the recent endorsement and support of CBT by the National Health Service (NHS) in the United Kingdom.
In April 2012, a Manchester-based Video CBT pilot project, called InstantCBT, was launched to provide live video therapy sessions for those with depression, anxiety, and stress-related conditions. At launch the site supported a variety of video platforms (including Skype, GChat, Yahoo and MSN, as well as bespoke options) and was aimed at lowering waiting times for mental health patients. This is a commercial, for-profit business.
In the United States, the American Telemedicine Association and the Center of Telehealth and eHealth are among the most reputable sources of information about telemedicine.
The Health Insurance Portability and Accountability Act (HIPAA) is a United States federal law that applies to all modes of electronic information exchange such as video-conferencing mental health services. In the United States, Skype, Gchat, Yahoo, and MSN are not permitted to conduct video-conferencing services unless these companies sign a business associate agreement stating that their employees are HIPAA-trained. For this reason, most companies provide their own specialized videotelephony services. Violating HIPAA in the United States can result in penalties of hundreds of thousands of dollars.
The momentum of telemental health and telepsychiatry is growing. In June 2012 the U.S. Veterans Administration announced expansion of the successful telemental health pilot. Their target was for 200,000 cases in 2012.
A growing number of HIPAA-compliant technologies are now available. There is an independent comparison site that provides a criteria-based comparison of telemental health technologies.
The SATHI Telemental Health Support project is another example of successful telemental health support.
The COVID-19 pandemic has been associated with large increases in telemedicine visits in the United States for various behavioral and psychiatric conditions such as anxiety, bipolar disorder, depression, insomnia, opioid use disorder, and overactivity. During the first two quarters of 2020 (January to June), office-based visits decreased as telemedicine visits increased for these six conditions according to data from IQVIA.
Teleradiology
Teleradiology is the ability to send radiographic images (x-rays, CT, MR, PET/CT, SPECT/CT, MG, US, and so on) from one location to another. For this process to be implemented, three essential components are required: an image sending station, a transmission network, and a receiving image-review station. The most typical implementation is two computers connected via the Internet. The computer at the receiving end must have a high-quality display screen that has been tested and cleared for clinical purposes. Sometimes the receiving computer will have a printer so that images can be printed for convenience.
The teleradiology process begins at the image sending station. The radiographic image and a modem or other connection are required for this first step. The image is scanned and then sent via the network connection to the receiving computer.
Today's high-speed, broadband-based Internet enables the use of new technologies for teleradiology: the image reviewer can now access distant servers in order to view an exam. Reviewers therefore do not need dedicated workstations to view the images; a standard personal computer (PC) and digital subscriber line (DSL) connection is enough to reach a provider's central server. No particular software is necessary on the PC, and the images can be reached from anywhere in the world.
Teleradiology is the most popular use for telemedicine and accounts for at least 50% of all telemedicine usage.
Telepathology
Telepathology is the practice of pathology at a distance. It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research. Performance of telepathology requires that a pathologist select the video images for analysis and render the diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields-of-view for analysis and diagnosis.
A pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In an editorial in a medical journal, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services. He, and his collaborators, published the first scientific paper on robotic telepathology. Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks. Weinstein is known to many as the "father of telepathology". In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989. This is still in operation, decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia.
Telepathology has been successfully used for many applications, including the rendering of histopathology tissue diagnoses at a distance, education, and research. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries.
Teledermatology
Teledermatology allows dermatology consultations over a distance using audio, visual and data communication, and has been found to improve efficiency, access to specialty care, and patient satisfaction. Applications comprise health care management such as diagnoses, consultation and treatment as well as (continuing medical) education. The dermatologists Perednia and Brown were the first to coin the term teledermatology in 1995, where they described the value of a teledermatologic service in a rural area underserved by dermatologists.
Teleophthalmology
Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Today, applications of teleophthalmology encompass access to eye specialists for patients in remote areas, ophthalmic disease screening, diagnosis and monitoring, as well as distance learning. Teleophthalmology may help reduce disparities by providing remote, low-cost screening tests such as diabetic retinopathy screening to low-income and uninsured patients. In Mizoram, India, a hilly area with poor roads, teleophthalmology provided care to over 10,000 patients between 2011 and 2015. These patients were examined by ophthalmic assistants locally, but surgery was done on appointment after the patient images were viewed online by eye surgeons in a hospital 6–12 hours away. Instead of an average of five trips for, say, a cataract procedure, only one was required, for the surgery itself, as even post-operative care such as removal of stitches and appointments for glasses was handled locally. There were large cost savings in travel as well.
In the United States, some companies allow patients to complete an online visual exam and within 24 hours receive a prescription from an optometrist valid for eyeglasses, contact lenses, or both. Some US states such as Indiana have attempted to ban these companies from doing business.
Telesurgery
Remote surgery (also known as telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location. It is a form of telepresence. Remote surgery combines elements of robotics, cutting-edge communication technology such as high-speed data connections, haptics and elements of management information systems. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery.
Remote surgery is essentially advanced telecommuting for surgeons, where the physical distance between the surgeon and the patient is immaterial. It promises to allow the expertise of specialized surgeons to be available to patients worldwide, without the need for patients to travel beyond their local hospital.
Remote surgery or telesurgery is performance of surgical procedures where the surgeon is not physically in the same location as the patient, using a robotic teleoperator system controlled by the surgeon. The remote operator may give tactile feedback to the user. Remote surgery combines elements of robotics and high-speed data connections. A critical limiting factor is the speed, latency and reliability of the communication system between the surgeon and the patient, though trans-Atlantic surgeries have been demonstrated.
Teleabortion
Telemedicine has been used globally to increase access to abortion care, specifically medical abortion, in environments where few abortion care providers exist or abortion is legally restricted. Clinicians are able to virtually provide counseling, review screening tests, observe the administration of an abortion medication, and directly mail abortion pills to people. In 2004, Women on Web (WoW), Amsterdam, started offering online consultations, mostly to people living in areas where abortion was legally restricted, informing them how to safely use medical abortion drugs to end a pregnancy. People contact the Women on Web service online; physicians review any necessary lab results or ultrasounds, mail mifepristone and misoprostol pills to people, then follow up through online communication. In the United States, medical abortion was introduced as a telehealth service in Iowa by Planned Parenthood of the Heartland in 2008 to allow a patient at one health facility to communicate via secure video with a health provider at another facility. In this model a person seeking abortion care must come to a health facility. An abortion care provider communicates with the person located at another site using clinic-to-clinic videoconferencing to provide medical abortion after screening tests and consultation with clinic staff. In 2018, the website Aid Access was launched by the founder of Women on Web, Dr. Rebecca Gomperts. It offers a similar service as Women on Web in the United States, but the medications are prescribed to an Indian pharmacy, then mailed to the United States.
The TelAbortion study conducted by Gynuity Health Projects, with special approval from the U.S. Food and Drug Administration (FDA), aims to increase access to medical abortion care without requiring an in-person visit to a clinic. This model was expanded during the COVID-19 pandemic and as of March 2020 exists in 13 U.S. states and has enrolled over 730 people in the study. The person receives counseling and instruction from an abortion care provider via videoconference from a location of their choice. The medications necessary for the abortion, mifepristone and misoprostol, are mailed directly to the person, and they have a follow-up video consultation in 7–14 days. A systematic review of telemedicine abortion has found the practice to be safe, effective, efficient, and satisfactory.
In the United States, eighteen states require the clinician to be physically present during the administration of medications for abortion, which effectively bans telehealth provision of medication abortion: five states explicitly ban telemedicine for medication abortion, while thirteen states require the prescriber (usually required to be a physician) to be physically present with the patient. In the UK, the Royal College of Obstetricians and Gynaecologists approved a no-test protocol for medication abortion, with mifepristone available through a minimal-contact pick-up or by mail.
Other specialist care delivery
Telemedicine can facilitate specialty care delivered by primary care physicians according to a controlled study of the treatment of hepatitis C. Various specialties are contributing to telemedicine, in varying degrees.
In light of the ongoing COVID-19 pandemic, primary care physicians have relied on telehealth to continue to provide care in outpatient settings. The transition to virtual health has been beneficial in providing patients access to care (especially care that does not require a physical exam, e.g., medication changes or minor health updates) and in avoiding putting patients at risk of COVID-19.
Telemedicine has also been beneficial in facilitating medical education to students while still allowing for adequate social distancing during the COVID-19 pandemic. Many medical schools have shifted to alternate forms of virtual curriculum and are still able to engage in meaningful telehealth encounters with patients.
Major developments
In policy
Telehealth is a modern form of health care delivery. Telehealth breaks away from traditional health care delivery by using modern telecommunication systems, including wireless communication methods. Traditional health care is legislated through policy to ensure the safety of medical practitioners and patients. Consequently, since telehealth is a new form of health care delivery that is now gathering momentum in the health sector, many organizations have started to legislate the use of telehealth into policy. In New Zealand, the Medical Council has a statement about telehealth on its website, illustrating that the council has foreseen the importance telehealth will have for the health system and, along with government, has started to introduce telehealth legislation to practitioners.
Transition to mainstream
Traditional use of telehealth services has been for specialist treatment. However, there has been a paradigm shift, and telehealth is no longer considered a specialist service. This development has helped eliminate many access barriers, as medical professionals are able to use wireless communication technologies to deliver health care. This is particularly evident in rural communities, where specialist care can be a long distance away, often in the nearest major city. Telehealth eliminates this barrier, as health professionals are able to conduct medical consultations through wireless communication technologies. However, this process depends on both parties having Internet access.
Telehealth allows the patient to be monitored between physician office visits, which can improve patient health. Telehealth also allows patients to access expertise which is not available in their local area. This remote patient monitoring ability enables patients to stay at home longer and helps avoid unnecessary hospital time. In the long term, this could potentially result in less burdening of the healthcare system and lower consumption of resources.
During the COVID-19 pandemic, there were large increases in the use of telemedicine for primary care visits within the United States, increasing from an average of 1.4 million visits in Q2 of 2018 and 2019 to 35 million visits in Q2 2020, according to data from IQVIA. The telehealth market is expected to grow at 40% a year in 2021. Use of telemedicine by general practitioners in the UK rose from 20–30% pre-COVID to almost 80% by the beginning of 2021, and more than 70% of practitioners and patients were satisfied with it. Boris Johnson was said to have "piled pressure on GPs to offer more in-person consultations", supporting a campaign largely orchestrated by the Daily Mail. The Royal College of General Practitioners said that a patient "right" to face-to-face appointments if they wished was "undeliverable".
Technology advancement
The technological advancement of wireless communication devices is a major development in telehealth. It allows patients to self-monitor their health conditions and to rely less on health care professionals. Furthermore, patients are more willing to stay on their treatment plans, as they are more invested and included in the process through shared decision-making. Technological advancement also means that health care professionals are able to use better technologies to treat patients, for example in surgery. Technological developments in telehealth are essential to improve health care, especially the delivery of healthcare services, as resources are finite and the population is ageing and living longer.
Licensing
U.S. licensing and regulatory issues
Restrictive licensure laws in the United States require a practitioner to obtain a full license to deliver telemedicine care across state lines. Typically, states with restrictive licensure laws also have several exceptions (varying from state to state) that may release an out-of-state practitioner from the additional burden of obtaining such a license. A number of states require practitioners who seek compensation to frequently deliver interstate care to acquire a full license.
If a practitioner serves several states, obtaining this license in each state could be an expensive and time-consuming proposition. Even if the practitioner never practices medicine face-to-face with a patient in another state, he/she still must meet a variety of other individual state requirements, including paying substantial licensure fees, passing additional oral and written examinations, and traveling for interviews.
In 2008, the U.S. passed the Ryan Haight Act which required face-to-face or valid telemedicine consultations prior to receiving a prescription.
State medical licensing boards have sometimes opposed telemedicine; for example, in 2012 electronic consultations were illegal in Idaho, and an Idaho-licensed general practitioner was punished by the board for prescribing an antibiotic, triggering reviews of her licensure and board certifications across the country. Subsequently, in 2015 the state legislature legalized electronic consultations.
In 2015, Teladoc filed suit against the Texas Medical Board over a rule that required in-person consultations initially; the judge refused to dismiss the case, noting that antitrust laws apply to state medical boards.
Major implications and impacts
Telehealth allows multiple, varying disciplines to merge and deliver a potentially more uniform level of care, using technology. As telehealth proliferates in mainstream healthcare, it challenges notions of traditional healthcare delivery. Some populations experience better quality, better access, and more personalized health care.
Health promotion
Telehealth can also increase health promotion efforts. These efforts can now be more personalised to the target population, and professionals can extend their help into homes or other private, safe environments in which patients or individuals can practise, ask questions, and gain health information. Health promotion using telehealth has become increasingly popular in underdeveloped countries, where physical resources are very scarce. There has been a particular push toward mHealth applications, as many areas, even underdeveloped ones, have mobile phone and smartphone coverage.
In developed countries, health promotion efforts using telehealth have been met with some success. The Australian hands-free breastfeeding Google Glass application reported promising results in 2014. This application, made in collaboration with the Australian Breastfeeding Association and a tech startup called Small World Social, helped new mothers learn how to breastfeed. Breastfeeding is beneficial to infant and maternal health and is recommended by the World Health Organisation and health organisations all over the world. Widespread breastfeeding could prevent 820,000 infant deaths globally, but the practice is often stopped prematurely, or intentions to breastfeed are disrupted, due to lack of social support, know-how or other factors. This application gave mothers hands-free information on breastfeeding and instructions on how to breastfeed, and also had an option to call a lactation consultant over Google Hangout. When the trial ended, all participants were reported to be confident in breastfeeding.
Health care quality and barriers to adoption
A scientific review indicates that, in general, outcomes of telemedicine are or can be as good as in-person care with health care use staying similar.
Advantages of the nonexclusive adoption of existing telemedicine technologies, such as smartphone videotelephony, may include reduced infection risks, better control of disease during epidemics, improved access to care, reduced stress and exposure to other pathogens during illness, reduced time and labor costs, more efficient matching of patients with clinicians who are experts in their particular symptoms, and reduced travel. Disadvantages may include privacy breaches (e.g. due to software backdoors and vulnerabilities or the sale of data), dependence on Internet access and, depending on various factors, increased health care use.
Theoretically, the whole health system could benefit from telehealth. There are indications telehealth consumes fewer resources and requires fewer people to operate it with shorter training periods to implement initiatives. Commenters suggested that lawmakers may fear that making telehealth widely accessible, without any other measures, would lead to patients using unnecessary health care services. Telemedicine could also be used for connected networks between health care professionals.
Telemedicine also can eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely; for example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
However, whether or not the standard of health care quality is increasing is debatable, with some literature refuting such claims. Research has reported that clinicians find the process difficult and complex to deal with. Furthermore, there are concerns around informed consent, legality issues as well as legislative issues. Although health care may become affordable with the help of technology, whether or not this care will be "good" is the issue. Many patient experience studies indicate high satisfaction with telemedicine.
Major barriers to wider adoption include technically challenged staff, resistance to change, ingrained habits, and patient age. Focused policy could eliminate several of these barriers.
A review lists a number of potentially good practices and pitfalls, recommending the use of "virtual handshakes" for confirming identity, taking consent for conducting a remote consultation instead of a conventional meeting, and professional standardized norms for protecting patient privacy and confidentiality. It also found that the COVID-19 pandemic substantially increased the voluntary adoption of telephone or video consultation and suggests that telemedicine technology "is a key factor in delivery of health care in the future".
Economic evaluations
Due to its digital nature, it is often assumed that telehealth saves the health system money. However, the evidence to support this is mixed. When conducting economic evaluations of telehealth services, evaluators need to be aware of potential outcomes and extraclinical benefits of the telehealth service. Economic viability depends on the funding model within the country being examined (public vs private), consumers' willingness to pay, and the expected remuneration of the clinicians or commercial entities providing the services (examples of research on these topics come from teledermoscopy in Australia).
In a UK telehealth trial done in 2011, it was reported that the cost of health care could be dramatically reduced with the use of telehealth monitoring. The usual cost of in vitro fertilisation (IVF) per cycle would be around $15,000; with telehealth it was reduced to $800 per patient. In Alaska, the Federal Health Care Access Network, which connects 3,000 healthcare providers to communities, has engaged in 160,000 telehealth consultations since 2001 and saved the state $8.5 million in travel costs for Medicaid patients alone.
Beneficial enablements
Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without the patient having to travel to visit them. Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place. Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care. It may also be preferable for patients with limited mobility, for example, patients with Parkinson's disease. Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.
Nonclinical uses
Distance education including continuing medical education, grand rounds, and patient education
administrative uses including meetings among telehealth networks, supervision, and presentations
research on telehealth
online information and health data management
healthcare system integration
asset identification, listing, and patient to asset matching, and movement
overall healthcare system management
patient movement and remote admission
Physical distancing to prevent transmission of communicable diseases
Limitations and restrictions
While many branches of medicine have wanted to fully embrace telehealth for a long time, there are certain risks and barriers which bar the full amalgamation of telehealth into best practice. For a start, it is doubtful whether a practitioner can fully leave the "hands-on" experience behind. Although it is predicted that telehealth will replace many consultations and other health interactions, it cannot yet fully replace a physical examination; this is particularly so in diagnostics, rehabilitation and mental health.
The benefits posed by telehealth challenge the normative means of healthcare delivery set in both legislation and practice. Therefore, the growing prominence of telehealth is starting to underscore the need for updated regulations, guidelines and legislation which reflect the current and future trends of healthcare practices. Telehealth enables timely and flexible care to patients wherever they may be; although this is a benefit, it also poses threats to privacy, safety, medical licensing and reimbursement. When a clinician and patient are in different locations, it is difficult to determine which laws apply to the context. Once healthcare crosses borders different state bodies are involved in order to regulate and maintain the level of care that is warranted to the patient or telehealth consumer. As it stands, telehealth is complex with many grey areas when put into practice especially as it crosses borders. This effectively limits the potential benefits of telehealth.
An example of these limitations is the current American reimbursement infrastructure, where Medicare will reimburse for telehealth services only when a patient is living in an area where specialists are in shortage, or in particular rural counties. The eligible site is defined as a medical facility, as opposed to a patient's home; the site that the practitioner is in, however, is unrestricted. Medicare will only reimburse live video (synchronous) services, not store-and-forward, mHealth, or remote patient monitoring (unless it involves live video). Some insurers currently reimburse telehealth, but not all yet, so providers and patients must make the extra effort of confirming coverage with the correct insurers before continuing. Again in America, states generally require that clinicians be licensed to practice in the state where the patient is located, so they can only provide their service to patients in another state if they also hold a license there.
More specific and widely reaching laws, legislation and regulations will have to evolve with the technology. They will have to be fully agreed upon: for example, will all clinicians need full licensing in every community to which they provide telehealth services, or could there be a limited-use telehealth licence? Would the limited-use licence cover all potential telehealth interventions, or only some? Who would be responsible if an emergency occurred and the practitioner could not provide immediate help – would someone else have to be in the room with the patient during every consultation? Which state, city or country would the law apply in when a breach or malpractice occurred?
A major prompt for legal action in telehealth thus far has been the issue of online prescribing and whether an appropriate clinician-patient relationship can be established online to make prescribing safe, making this an area that requires particular scrutiny. It may be required that the practitioner and patient meet in person at least once before online prescribing can occur, or that at least a live video conference take place, rather than relying on impersonal questionnaires or surveys to determine need.
The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission. There is also a concern that telemedicine may actually decrease time efficiency due to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation. Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor. Other obstacles to the implementation of telemedicine include unclear legal regulation for some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.
Another disadvantage of telemedicine is the inability to start treatment immediately. For example, a patient suffering from a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form.
Equity is another concern: many families and individuals in the United States do not have Internet access at home, and may also lack the equipment needed to access telehealth services, such as a laptop, tablet, or smartphone.
Ethical issues
Informed consent is another issue: should the patient give informed consent to receive online care before it starts, or is consent implied when the care can only practically be given at a distance? Because telehealth carries the possibility of technical problems such as transmission errors, security breaches, or storage failures that can disrupt communication, it may be wise to obtain informed consent in person first and to have backup options for when technical issues occur. In person, a patient can see who is involved in their care (namely themselves and their clinician in a consultation), but online others are involved, such as the technology providers. Consent may therefore need to disclose everyone involved in the transmission of the information and the security measures that will keep it private, and any malpractice case may need to involve all of those parties rather than, as usual, just the practitioner.
The state of the market
The rate of adoption of telehealth services in any jurisdiction is frequently influenced by factors such as the adequacy and cost of existing conventional health services in meeting patient needs; the policies of governments and/or insurers with respect to coverage and payment for telehealth services; and medical licensing requirements that may inhibit or deter the provision of telehealth second opinions or primary consultations by physicians.
Projections for the growth of the telehealth market are optimistic, and much of this optimism is predicated upon the increasing demand for remote medical care. According to a recent survey, nearly three-quarters of U.S. consumers say they would use telehealth. At present, several major companies along with a bevy of startups are working to develop a leading presence in the field.
In the UK, the Government's Care Services minister, Paul Burstow, has stated that telehealth and telecare would be extended over the next five years (2012–2017) to reach three million people.
United States
In the United States, telemedicine companies are collaborating with health insurers and other telemedicine providers to expand market share and patient access to telemedicine consultations.
As of 2019, 95% of employers believe their organizations will continue to provide health care benefits over the next five years.
The COVID-19 pandemic drove increased usage of Telehealth services in the U.S. The U.S. Centers for Disease Control and Prevention reported a 154% increase in telehealth visits during the last week of March 2020, compared to the same dates in 2019.
Switzerland
From 1999 to 2018, the University Hospital of Zurich (USZ) offered clinical telemedicine and online medical advice on the Internet. A team of doctors answered around 2,500 anonymous inquiries annually, usually within 24 to 48 hours. The team consisted of up to six physicians who were specialists in clinical telemedicine at the USZ and had many years of experience, particularly in internal and general medicine. Over the entire period, 59,360 inquiries were sent and answered. The majority of the users were female and on average 38 years old; over time, however, considerably more men and older people began to use the service. The diversity of medical queries covered all categories of the International Statistical Classification of Diseases and Related Health Problems (ICD) and correlated with the statistical frequency of diseases in hospitals in Switzerland. Most of the inquiries concerned unclassified symptoms and signs, services related to reproduction, respiratory diseases, skin diseases, health services, diseases of the eye and nervous systems, injuries, and disorders of the female genital tract. As with the Swedish online medical advice service, one-sixth of the requests related to often shameful and stigmatised conditions of the genitals, the gastrointestinal tract, sexually transmitted diseases, obesity and mental disorders. By providing an anonymous space where users can talk about such diseases, online telemedical services empower patients and enhance their health literacy by providing individualized health information. The Clinical Telemedicine and Online Counselling service of the University Hospital of Zurich is currently being revised and will be offered in a new form in the future.
Developing countries
For developing countries, telemedicine and eHealth can be the only means of healthcare provision in remote areas. For example, the difficult financial situation in many African states and the lack of trained health professionals have meant that the majority of the people in sub-Saharan Africa are badly disadvantaged in medical care, and in remote areas with low population density, direct healthcare provision is often very poor. However, provision of telemedicine and eHealth from urban centers or from other countries is hampered by the lack of communications infrastructure, with no landline phone or broadband internet connection, little or no mobile connectivity, and often not even a reliable electricity supply.
Similarly, India has a broad rural–urban divide, and rural India is largely deprived of medical facilities, giving telemedicine room for growth. The shortage of education and of medical professionals in rural areas is behind the government's drive to use technology to bridge this gap. Remote areas present a number of challenges not only for service providers but also for the families accessing these services. Since 2018, telemedicine has expanded in India and opened a new route for doctor consultations. On 25 March 2020, in the wake of the COVID-19 pandemic, the Ministry of Health and Family Welfare issued India's Telemedicine Practice Guidelines. The Board of Governors tasked by the Health Ministry published an amendment to the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002, which gave much-needed statutory support to the practice of telemedicine in India. The sector is still growing, with considerable scope for development. In April 2020, the union health ministry launched the eSanjeevani telemedicine service, which operates at two levels: a doctor-to-doctor platform and a doctor-to-patient platform. The service crossed five million tele-consultations within a year of its launch, indicating a conducive environment for the acceptance and growth of telemedicine in India.
The Satellite African eHEalth vaLidation (SAHEL) demonstration project has shown how satellite broadband technology can be used to establish telemedicine in such areas. SAHEL was started in 2010 in Kenya and Senegal, providing self-contained, solar-powered internet terminals to rural villages for use by community nurses for collaboration with distant health centers for training, diagnosis and advice on local health issues.
In 2014, the government of Luxembourg, along with satellite operator, SES and NGOs, Archemed, Fondation Follereau, Friendship Luxembourg, German Doctors and Médecins Sans Frontières, established SATMED, a multilayer eHealth platform to improve public health in remote areas of emerging and developing countries, using the Emergency.lu disaster relief satellite platform and the Astra 2G TV satellite. SATMED was first deployed in response to a report in 2014 by German Doctors of poor communications in Sierra Leone hampering the fight against Ebola, and SATMED equipment arrived in the Serabu clinic in Sierra Leone in December 2014. In June 2015 SATMED was deployed at Maternité Hospital in Ahozonnoude, Benin to provide remote consultation and monitoring, and is the only effective communication link between Ahozonnoude, the capital and a third hospital in Allada, since land routes are often inaccessible due to flooding during the rainy season.
History
The development and history of telehealth or telemedicine (terms used interchangeably in literature) is deeply rooted in the history and development in not only technology but also society itself. Humans have long sought to relay important messages through torches, optical telegraphy, electroscopes, and wireless transmission. Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and additionally with telemedical devices to support in-home care.
In the 21st century, with the advent of the internet, portable devices and other such digital devices are taking a transformative role in healthcare and its delivery.
Earliest instances
Although traditional medicine relies on in-person care, the need and desire for remote care has existed since the Roman and pre-Hippocratic periods in antiquity. The elderly and infirm who could not visit temples for medical care sent representatives to convey information on symptoms and bring home a diagnosis as well as treatment. In Africa, villagers would use smoke signals to warn neighboring villages of disease outbreak. The beginnings of telehealth thus lie in primitive forms of communication and technology. The exact date of origin for telehealth is unknown, but remote signalling is known to have been used during the bubonic plague, when heliographs and bonfires were used to notify other groups of people about famine and war. That version of telehealth was far different from telehealth as we know it today: these methods used little technology, but they began to spread the idea of connectivity among groups of people who could not be together geographically.
1800s to early 1900s
As technology developed and wired communication became increasingly commonplace, the ideas surrounding telehealth began emerging. The earliest telehealth encounter can be traced to Alexander Graham Bell in 1876, when he used his early telephone to get help from his assistant Mr. Watson after he spilt acid on his trousers. Another instance of early telehealth, specifically telemedicine, was reported in The Lancet in 1879, where an anonymous writer described a case in which a doctor successfully diagnosed a child over the telephone in the middle of the night. The same Lancet issue also discussed the potential of remote patient care to avoid unnecessary house visits, which were part of routine health care during the 1800s. Other instances of telehealth during this period came from the American Civil War, during which telegraphs were used to deliver mortality lists and medical care to soldiers. As the 1900s started, physicians quickly found a use for the telephone, making it a prime communication channel for contacting patients and other physicians; over the next fifty-plus years, the telephone was a staple of medical communication. By the 1930s, radio communication, which had already proved its value during World War I, played a key role as well; it was used in particular to communicate medical information to remote areas such as Alaska and Australia. By the time of the Vietnam War, radio communication had become more advanced and was used to dispatch medical teams in helicopters. These developments also fed into the Aerial Medical Service (AMS), which used telegraphs, radios, and planes to care for people living in remote areas.
From the late 1800s to the early 1900s the early foundations of wireless communication were laid down. Radios provided an easier and near instantaneous form of communication. The use of radio to deliver healthcare became accepted for remote areas. The Royal Flying Doctor Service of Australia is an example of the early adoption of radios in telehealth.
In 1925 the inventor Hugo Gernsback wrote an article for the magazine Science and Invention which included a prediction of a future where patients could be treated remotely by doctors through a device he called a "teledactyl". His descriptions of the device are similar to what would later become possible with new technology.
Mid-1900s to 1980s
When the American National Aeronautics and Space Administration (NASA) began plans to send astronauts into space, the need for telemedicine became clear. In order to monitor their astronauts in space, telemedicine capabilities were built into the spacecraft as well as the first spacesuits. Additionally, during this period, telehealth and telemedicine were promoted in different countries, especially the United States and Canada. After the telegraph and telephone started to successfully help physicians treat patients from remote areas, telehealth became more recognized. Further technological advancement occurred when NASA sent men to space: its engineers created biomedical telemetry and telecommunications systems that monitored vital signs such as blood pressure, heart rate, respiration rate, and temperature. Once created, this technology became the basis of telehealth medicine for the public.
Massachusetts General Hospital and Boston's Logan International Airport had a role in the early use of telemedicine, which more or less coincided with NASA's foray into telemedicine through the use of physiologic monitors for astronauts. On October 26, 1960, a plane struck a flock of birds upon takeoff, killing many passengers and leaving a number wounded. Due to the extreme complexity of trying to get all the medical personnel out from the hospital, the practical solution became telehealth. This was expanded upon in 1967, when Kenneth Bird at Massachusetts General founded one of the first telemedicine clinics. The clinic addressed the fundamental problem of delivering occupational and emergency health services to employees and travellers at the airport, located three congested miles from the hospital. Clinicians at the hospital would provide consultation services to patients who were at the airport. Consultations were achieved through microwave audio as well as video links. The airport began seeing over a hundred patients a day at its nurse-run clinic that cared for victims of plane crashes and other accidents, taking vital signs, electrocardiograms, and video images that were sent to Massachusetts General. Over 1,000 patients are documented as having received remote treatment from doctors at MGH using the clinic's two-way audiovisual microwave circuit. One notable story featured a woman who got off a flight in Boston and was experiencing chest pain. They performed a workup at the airport, took her to the telehealth suite where Dr. Raymond Murphy appeared on the television, and had a conversation with her. While this was happening, another doctor took notes and the nurses took vitals and any test that Dr. Murphy ordered. At this point, telehealth was becoming more mainstream and was starting to become more technologically advanced, which created a viable option for patients.
In 1964, the Nebraska Psychiatric Institute began using television links to form two-way communication with the Norfolk State Hospital which was 112 miles away for the education and consultation purposes between clinicians in the two locations.
In 1972 the Department of Health, Education and Welfare in the United States approved funding for seven telemedicine projects across different states. This funding was renewed and two further projects were funded the following year.
1980s to 1990s – maturation and renaissance
Telehealth projects underway before and during the 1980s took off but failed to enter mainstream healthcare. As a result, this period of telehealth history is called the "maturation" stage and made way for sustainable growth. Although state funding in North America was beginning to run low, different hospitals began to launch their own telehealth initiatives. NASA provided an ATS-3 satellite to enable medical care communications of American Red Cross and Pan American Health Organization response teams following the 1985 Mexico City earthquake. The agency then launched its SateLife/HealthNet programme to increase health service connectivity in developing countries. In 1997, NASA sponsored Yale's Medical Informatics and Technology Applications Consortium project.
Florida first experimented with "primitive" telehealth in its prisons during the latter 1980s. Working with doctors Oscar W. Boultinghouse and Michael J. Davis from the early 1990s to 2007, Glenn G. Hammack led the University of Texas Medical Branch (UTMB) development of a pioneering telehealth program in Texas state prisons. The three UTMB alumni would, in 2007, co-found telehealth provider NuPhysician.
The first interactive telemedicine system, operating over standard telephone lines, designed to remotely diagnose and treat patients requiring cardiac resuscitation (defibrillation) was developed and launched by an American company, MedPhone Corporation, in 1989. A year later under the leadership of its President/CEO S Eric Wachtel, MedPhone introduced a mobile cellular version, the MDPhone. Twelve hospitals in the U.S. served as receiving and treatment centers.
At-home virtual care
As the expansion of telehealth continued, in 1990 Maritime Health Services (MHS) played a major part in initiating occupational health services at sea. It placed a medical officer aboard a Pacific trawler, allowing round-the-clock communication with a physician through the Medical Consultation Network (MedNet), a video-chat system with live audio and video so the physician on the other end of the call can see and hear what is happening. MedNet can be used from anywhere, not just aboard ships. The ability to provide on-site visual information gives remote patients access to expert emergency help and medical attention, saving money as well as lives, and it has created demand for at-home monitoring. At-home care has also become a large part of telehealth: doctors or nurses now make pre-operative and post-operative phone calls to check in, and companies such as Lifeline give the elderly a button that automatically calls for emergency help. If a patient is sent home after surgery, telehealth lets physicians follow the patient's progress without a hospital stay. TeleDiagnostic Systems of San Francisco created a device that monitors sleep patterns, so people with sleep disorders do not have to spend the night at the hospital. Another at-home device, the Wanderer, was attached to patients with Alzheimer's disease or other dementias and notified staff when the wearer wandered off so that they could be followed. All these devices extended healthcare beyond hospitals, allowing more people to be helped efficiently.
2000s to present
The advent of high-speed Internet, and the increasing adoption of ICT in traditional methods of care, spurred advances in telehealth delivery. Increased access to portable devices, like laptops and mobile phones, made telehealth more plausible; the industry then expanded into health promotion, prevention and education.
In 2002, Dr. G. Byron Brooks, a former NASA surgeon and engineer who had also helped manage the UTMB Telemedicine program, co-founded Teladoc in Dallas, Texas, which was then launched in 2005 as the first national telehealth provider.
In the 2010s, integration of smart home telehealth technologies, such as health and wellness devices, software, and integrated IoT, has accelerated the industry. Healthcare organizations are increasingly adopting the use of self-tracking and cloud-based technologies, and innovative data analytic approaches to accelerate telehealth delivery.
In 2015, Mercy Health system opened Mercy Virtual, in Chesterfield, Missouri, the world's first medical facility dedicated solely to telemedicine.
COVID-19
Telehealth is playing a significant role amidst the COVID-19 pandemic. With the pandemic, telehealth has become a vital means of medical communication. It allows doctors to return to humanizing the patient, as it forces them to listen to what people have to say and make a diagnosis from there. Some researchers claim this creates an environment that encourages greater vulnerability among patients in self-disclosure in the practice of narrative medicine. Telehealth allows for Zoom calls and video chats from across the world, checking in on patients and letting them speak with physicians. Universities are now ensuring that medical students graduate with proficient telehealth communication skills. Experts suggest telehealth will be a vital part of medicine; with more virtual options becoming available, the public is now able to choose whether to stay home or go into the office.
See also
Artificial intelligence in healthcare
American Telemedicine Association
American Well
Center for Telehealth and E-Health Law
Connected health
eHealth
In absentia health care
Mercy Virtual
mHealth
Myca
National Rural Health Association
Data sharing between doctors through mixed reality headsets
Ontario Telemedicine Network
Remote therapy
Ronald S. Weinstein
Smart city
Tele-epidemiology
Teladoc
Telecare
Telemedicine service providers
Telemental health
Teleneuropsychology
Telenursing
Telepathology
Telepsychology
UNESCO Chair in Telemedicine
References
Further reading
Results from clinical trial carried out by the UK Government's Department of Health
Online introduction and primer to telehealth and telemedicine
A document to assist in the planning of telehealth and telemedicine projects for rural community and migrant health centers and other health care organizations.
External links
Telecommunication services
Health informatics
Technology in society
Assistive technology |
956887 | https://en.wikipedia.org/wiki/Mark%20Abene | Mark Abene | Mark T. Abene (born February 23, 1972) is an American infosec expert and entrepreneur, originally from New York City. Better known by his pseudonym Phiber Optik, he was once a member of the hacker groups Legion of Doom and Masters of Deception.
Phiber Optik was a high-profile hacker in the 1980s and early 1990s, appearing in The New York Times, Harper's, Esquire, in debates and on television. He is an important figure in the 1995 non-fiction book Masters of Deception — The Gang that Ruled Cyberspace.
Early life
Abene's first contact with computers was at around 9 years of age at a local department store, where he would often pass the time while his parents shopped. His first computer was a TRS-80 MC-10 with 4 kilobytes of RAM, a 32-column screen, no lower case, and a cassette tape recorder to load and save programs. As was customary at the time, the computer connected to a television set for use as a monitor.
After receiving the gifts of a RAM upgrade (to 20K) and a 300 baud modem from his parents, he used his computer to access CompuServe and shortly after discovered the world of dialup BBSes via people he met on CompuServe's "CB simulator", the first nationwide online chat. On some of these BBSes, Abene discovered dialups and guest accounts to DEC minicomputers running the RSTS/E and TOPS-10 operating systems as part of the BOCES educational program in Long Island, New York. Accessing those DEC minicomputers he realized there was a programming environment that was much more powerful than that of his own home computer, and so he began taking books out of the library in order to learn the programming languages that were now available to him. This and the ability to remotely save and load back programs that would still be there the next time he logged in had a profound effect on Abene, who came to view his rather simple computer as a window into a much larger world.
Having learned about programming and fundamental security concepts during those early years, Abene further honed his skill in understanding the intricacies of the nationwide telephone network. In the mid-1980s he was first introduced to members of the Legion of Doom (LOD), a loosely knit group of highly respected teenage hackers who shared Abene's uncompromising desire to understand technology.
Their main focus was to explore telecommunications systems, minicomputer and mainframe operating systems and large-scale packet data networks. The eventual decline of the LOD toward the late 1980s, largely due to fragmentation and dissension within the group, coupled with the legal prosecution of a handful of its members, caused Abene to increasingly align himself with a local group of up-and-coming hackers, who came to be known as the Masters of Deception (MOD).
Legal tribulations
On January 24, 1990, Abene and other MOD members had their homes searched and property seized by the U.S. Secret Service largely based on government suspicions of having caused AT&T Corporation's network crash just over a week earlier on January 15 (Abene was personally accused by the Secret Service of having done as much, during the search and seizure). Some weeks later, AT&T themselves admitted that the crash was the result of a flawed software update to the switching systems on their long-distance network, thus, human error on their part. In February 1991, Abene was arrested and charged with computer tampering and computer trespass in the first degree, New York state offenses. Laws at the time were considered a “gray area” concerning information security. Abene, who was a minor at the time, pleaded "not guilty" to the first two offenses and ultimately accepted a plea agreement to a lesser misdemeanor charge, and was sentenced to 35 hours of community service.
Abene and four other members of the Masters of Deception were also arrested in December 1991 and indicted by a Manhattan federal grand jury on July 8, 1992, on an 11-count charge. The indictment relied heavily on evidence collected by court-approved wire tapping of telephone conversations between MOD members. According to U.S. Attorney Otto Obermaier, it was the "first investigative use of court-authorized wiretaps to obtain conversations and data transmissions of computer hackers" in the United States.
According to a July 9, 1992 newsletter from the Electronic Frontier Foundation, the defendants faced a maximum term of 50 years in prison and fines of $2.5 million if found guilty on all counts. Despite the fact that Abene was a minor at the time the crimes were allegedly committed, was only involved in a small fraction of the sub-charges, and often in a passive way, a plea arrangement resulted in by far the harshest sentence: 12 months imprisonment, three years probation and 600 hours of community service.
After serving the one-year sentence at the Federal Prison "Camp" in Schuylkill, Pennsylvania, Abene was released in November 1994. In January 1995, a huge celebration called "Phiberphest '95" was held in his honor at Manhattan's Irving Plaza ballroom/nightclub. In Time, Joshua Quittner called him "the first underground hero of the Information Age, the Robin Hood of cyberspace." Upon leaving jail, Phiber Optik made the @Cafe his hang out spot.
Social protests
Many people inside and outside of the hacker world felt that Abene was made an example of, and was not judged according to earlier court standards. Abene had built up a significant reputation in the hacker sub-culture, for example regularly appearing on the radio show Off the Hook, hosted by Eric Corley (a.k.a. Emmanuel Goldstein), debating and defending the morals and motivations of hackers in public forums and in interviews, and lecturing on the history of telecommunications technology at the night courses of several New York City universities. At the time of the indictment he was working at MindVox, an early BBS/ISP founded by two New York LOD members, and subsequently on EchoNYC, a multi-user BBS and early ISP.
ECHO users, ECHO management themselves and hackers around the nation expected Abene to get off with probation or at most a few months of jail time. Co-defendants and previous offenders charged with "hacking" offenses had received rather lenient punishments, and given his new-found enthusiasm for using his knowledge to constructive ends, the general feeling was optimistic prior to sentencing.
A statement made by U.S. Attorney Otto Obermaier in conjunction with the indictment, "The message that ought to be delivered with this indictment is that such conduct will not be tolerated, irrespective of the age of the particular accused or their ostensible purpose," was interpreted by Abene's supporters to mean that MOD was made an example of, to show that the authorities could handle the perceived "hacker threat". During sentencing, Judge Stanton said that "the defendant stands as a symbol here today," and that "hacking crimes constitute a real threat to the expanding information highway", reinforcing the view that a relatively harmless "teacher" was judged as a symbol for all hackers.
Professional life
Abene has spoken on the subject of security in many publications such as The New York Times, The Washington Post, The Wall Street Journal, and Time. He has appeared as a speaker at both hacker and security industry conferences worldwide and frequently visits universities to speak to students about information security.
After some years as a security consultant, he joined forces with former Legion of Doom member Dave Buchwald and a third colleague, Andrew Brown, to create the security consulting firm Crossbar Security. Crossbar provided consulting services for third-party companies, during which the principals conducted business in the U.S., Japan, Brazil, and Sweden. As a result of the "dot com" bust, Crossbar ultimately went defunct in 2001, largely due to cuts in corporate security spending.
Abene made his acting début as "The Inside Man" in the fiction film Urchin, completed in 2006 and released in the US in February 2007, in which other hacker notables such as Dave Buchwald and Emmanuel Goldstein can also be seen. In 2009, he founded TraceVector, an intrusion detection firm that makes use of supercomputing and data analytics. He currently resides in Silicon Valley.
References
Bibliography
The Rise and Fall of Information Security in the Western World. Speech by Mark Abene, Hack in the Box security conference, Kuala Lumpur, Malaysia, 2007.
CNET Q&A: Mark Abene, from 'Phiber Optik' to security guru.
New York Software Industry Association.
Goldstein, Emmanuel (2001). Freedom Downtime, opening sequence.
Savage, Annaliza (September 1995). Notes from the underground — Phiber Optik goes directly to jail. .net Issue 10.
Quittner, Joshua (January 23, 1995). Hacker Homecoming. TIME.
Dibbell, Julian (January 12, 1994). Prisoner: Phiber Optik Goes Directly to Jail. The Village Voice
Sterling, Bruce (January 1994). The Hacker Crackdown — Law and Disorder on the Electronic Frontier. From Project Gutenberg.
Goldstein, Emmanuel (November 10, 1993). Interview with Phiber Optik. Off the Hook radio show. (Online archive)
Electronic Frontier Foundation (July 9, 1992). Federal hacking indictments issued against five in New York City. Retrieved September 4, 2004
Newsbytes (July 9, 1992). New York Computer Crime Indictments. Retrieved September 11, 2004.
Grand jury, United States District Court Southern District of New York (1992). Indictment of Julio Fernandez, John Lee, Mark Abene, Elias Ladopoulos, Paul Stira. (Copy from Computer underground Digest, 4:31).
All Circuits are Busy Now: The 1990 AT&T Long Distance Network Collapse.
External links
The History of MOD
modbook1.txt — "The History of MOD: Book One: The Originals"
modbook2.txt — "The History of MOD: Book Two: Creative Mindz"
modbook3.txt — "The Book of MOD: Part Three: A Kick in the Groin"
modbook4.txt — "The Book of MOD: Part Four: End of '90-'1991"
modbook5.txt — "The Book of MOD: Part 5: Who are They And Where Did They Come From? (Summer 1991)"
Phiber Optik Goes to Prison — Article in Wired Magazine by Julian Dibbell
Off the Hook shows (available as MP3 files)
1991-03-13, "Phiber Optik's" first appearance on the show. .
1993-11-03, announcement of Mark Abene's sentence. No recording exists. .
1993-11-10, the first show following the sentencing, Phiber Optik in the studio. .
1994-01-05, last show before Phiber Optik's going to prison. .
1972 births
Computer security specialists
Legion of Doom (hacker group)
Living people
Masters of Deception
Businesspeople from New York City
Phreaking |
11302396 | https://en.wikipedia.org/wiki/Pen%20computing | Pen computing | Pen computing refers to any computer user interface that uses a pen or stylus and tablet rather than input devices such as a keyboard or a mouse.
Pen computing is also used to refer to the usage of mobile devices such as tablet computers, PDAs and GPS receivers. The term has been used to refer to the usage of any product allowing for mobile communication. An indication of such a device is a stylus or digital pen, generally used to press upon a graphics tablet or touchscreen, as opposed to using a more traditional interface such as a keyboard, keypad, mouse or touchpad.
Historically, pen computing (defined as a computer system employing a user-interface using a pointing device plus handwriting recognition as the primary means for interactive user input) predates the use of a mouse and graphical display by at least two decades, starting with the Stylator and RAND Tablet systems of the 1950s and early 1960s.
General techniques
User interfaces for pen computing can be implemented in several ways. Actual systems generally employ a combination of these techniques.
Pointing/locator input
The tablet and stylus are used as pointing devices, such as to replace a mouse. While a mouse is a relative pointing device (one uses the mouse to "push the cursor around" on a screen), a tablet is an absolute pointing device (one places the stylus where the cursor is to appear).
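The difference can be illustrated with a small sketch in TypeScript (a hypothetical example; the screen and tablet dimensions and function names are illustrative assumptions, not taken from any particular driver or toolkit). A relative device reports deltas that are added to wherever the cursor already is, while an absolute device reports tablet coordinates that are scaled directly onto the screen.

interface Point { x: number; y: number; }

const SCREEN = { width: 1920, height: 1080 };   // assumed display size
const TABLET = { width: 10000, height: 7500 };  // assumed digitizer units

// Relative device (mouse-like): the report is a delta applied to the current cursor position.
function applyMouseDelta(cursor: Point, dx: number, dy: number): Point {
  return {
    x: Math.min(Math.max(cursor.x + dx, 0), SCREEN.width - 1),
    y: Math.min(Math.max(cursor.y + dy, 0), SCREEN.height - 1),
  };
}

// Absolute device (tablet-like): the report is a tablet position scaled onto the screen;
// the previous cursor position is irrelevant.
function applyStylusPosition(tabletX: number, tabletY: number): Point {
  return {
    x: Math.round((tabletX / TABLET.width) * (SCREEN.width - 1)),
    y: Math.round((tabletY / TABLET.height) * (SCREEN.height - 1)),
  };
}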
There are a number of human factors to be considered when actually substituting a stylus and tablet for a mouse. For example, it is much harder to target or tap the same exact position twice with a stylus, so "double-tap" operations with a stylus are harder to perform if the system is expecting "double-click" input from a mouse.
A finger can be used as the stylus on a touch-sensitive tablet surface, such as with a touchscreen.
Handwriting recognition
The tablet and stylus can be used to replace a keyboard, or both a mouse and a keyboard, by using the tablet and stylus in two modes:
Pointing mode: The stylus is used as a pointing device as above.
On-line Handwriting recognition mode: The strokes made with the stylus are analyzed as an "electronic ink" by software which recognizes the shapes of the strokes or marks as handwritten characters. The characters are then input as text, as if from a keyboard.
Different systems switch between the modes (pointing vs. handwriting recognition) by different means, e.g.
by writing in separate areas of the tablet for pointing mode and for handwriting-recognition mode.
by pressing a special button on the side of the stylus to change modes.
by context, such as treating any marks not recognized as text as pointing input.
by recognizing a special gesture mark.
The term "on-line handwriting recognition" is used to distinguish recognition of handwriting using a real-time digitizing tablet for input, as contrasted to "off-line handwriting recognition", which is optical character recognition of static handwritten symbols from paper.
Direct manipulation
The stylus is used to touch, press, and drag on simulated objects directly. The Wang Freestyle system is one example. Freestyle worked entirely by direct manipulation, with the addition of electronic "ink" for adding handwritten notes.
Gesture recognition
This is the technique of recognizing certain special shapes not as handwriting input, but as an indicator of a special command.
For example, a "pig-tail" shape (used often as a proofreader's mark) would indicate a "delete" operation. Depending on the implementation, what is deleted might be the object or text where the mark was made, or the stylus can be used as a pointing device to select what it is that should be deleted. With Apple's Newton OS, text could be deleted by scratching in a zig-zag pattern over it.
Recent systems have used digitizers which can recognize more than one "stylus" (usually a finger) at a time, and make use of Multi-touch gestures.
The PenPoint OS was a special operating system which incorporated gesture recognition and handwriting input at all levels of the operating system. Prior systems which employed gesture recognition only did so within special applications, such as CAD/CAM applications or text processing.
History
Pen computing has very deep historical roots.
For example, the first patent for an electronic device used for handwriting, the telautograph, was granted in 1888.
What is probably the first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1915.
Around 1954 Douglas T. Ross, working on the Whirlwind computer at MIT, wrote the "first hand-drawn graphics input program to a computer".
The first publicly demonstrated system using a tablet and handwriting text recognition instead of a keyboard for working with a modern digital computer dates to 1956.
In addition to many academic and research systems, there were several companies with commercial products in the 1980s: Pencept, Communications Intelligence Corporation, and Linus were among the best known of a crowded field. Later, GO Corp. brought out the PenPoint OS operating system for a tablet PC product; one of the patents from GO Corporation was the subject of a recent infringement lawsuit concerning the Tablet PC operating system.
The following timeline list gives some of the highlights of this history:
Before 1950
1888: U.S. Patent granted to Elisha Gray on electrical stylus device for capturing handwriting.
1915: U.S. Patent on handwriting recognition user interface with a stylus.
1942: U.S. Patent on touchscreen for handwriting input.
1945: Vannevar Bush proposes the Memex, a data archiving device including handwriting input, in an essay As We May Think.
1950s
Tom Dimond demonstrates the Styalator electronic tablet with pen for computer input and handwriting recognition.
Early 1960s
RAND Tablet invented.
Late 1960s
Alan Kay of Xerox PARC proposed a notebook using pen input called Dynabook: however, the device is never constructed.
1971
Touchscreen interface developed at SLAC.
1979
Fairlight CMI, one of the early commercial digital sampling workstations
1982
Pencept of Waltham, Massachusetts markets a general-purpose computer terminal using a tablet and handwriting recognition instead of a keyboard and mouse.
Cadre System markets the Inforite point-of-sale terminal using handwriting recognition and a small electronic tablet and pen.
1985:
Pencept and CIC both offer PC computers for the consumer market using a tablet and handwriting recognition instead of a keyboard and mouse. Operating system is MS-DOS.
1989
The first commercially available tablet-type portable computer was the GRiDPad 1900 from GRiD Systems, released in September. It ran GRiDPen (later released as PenRight), a graphic system with pen input and handwriting recognition running on MS-DOS.
Wang Laboratories introduces Freestyle. Freestyle was an application that would do a screen capture from an MS-DOS application, and let the user add voice and handwriting annotations. It was a sophisticated predecessor to later note-taking applications for systems like the Tablet PC. The operating system was MS-DOS.
1991
The Momenta Pentop was released.
GO Corp announced a dedicated operating system, called PenPoint OS, featuring control of the operating system desktop via handwritten gesture shapes. Gestures included "flick" gestures in different directions, check-marks, cross-outs, pig-tails, and circular shapes, among others.
Portia Isaacsen of Future Computing estimates the total annual market for pen computers such as those running the PenPoint OS to be on the order of $500 Million.
NCR released the model 3125 pen computer running MS-DOS, PenPoint or Pen Windows.
The Apple Newton entered development; although it ultimately became a PDA, its original concept (which called for a larger screen and greater sketching capabilities) resembled that of a tablet PC.
Sam Tramiel of Atari Corp. presented the "ST-Pad" (codenamed "STylus") at the CeBIT '91 in Hanover, Germany. The computer never went into production.
1992
GO Corp shipped PenPoint and IBM announced IBM 2125 pen computer (the first IBM model named "ThinkPad") in April.
Microsoft releases Windows for Pen Computing as a response to the PenPoint OS.
1993
IBM releases the ThinkPad, IBM's first commercialized portable tablet computer product available to the consumer market, as the IBM ThinkPad 750P and 360P
Apple Computer announces the Newton PDA, also known as the Apple MessagePad, which includes handwriting recognition with a stylus.
Amstrad release the "PenPad" or PDA600, a similar pen-based device. It did not achieve commercial success.
AT&T introduced the EO Personal Communicator combining PenPoint with wireless communications.
BellSouth released the IBM Simon Personal Communicator, an analog cellphone using a touch-screen and display. It did not include handwriting recognition, but did permit users to write messages and send them as faxes on the analog cellphone network, and included PDA and Email features.
1999
The "QBE" pen computer created by Aqcess Technologies wins Comdex Best of Show.
2000
The "QBE Vivo" pen computer created by Aqcess Technologies ties for Comdex Best of Show.
2001
Bill Gates of Microsoft demonstrates first public prototype of a Tablet PC (defined by Microsoft as a pen-enabled computer conforming to hardware specifications devised by Microsoft and running a licensed copy of Windows XP Tablet PC Edition) at Comdex.
Wacom introduces the Cintiq pen-based tablet platform for professional artists.
2003
FingerWorks develops the touch technology and touch gestures later used in the Apple iPhone.
2005
LeapFrog Enterprises releases the Fly pentop.
2006
Windows Vista released for general availability. Vista included the functionality of the special Tablet PC edition of Windows XP.
2008
In April 2008, as part of a larger federal court case, the gesture features of the Windows/Tablet PC operating system and hardware were found to infringe on a patent by GO Corp. concerning user interfaces for pen computer operating systems. Microsoft's acquisition of the technology is the subject of a separate lawsuit.
HP releases the second MultiTouch capable tablet: the HP TouchSmart tx2z.
2011
Samsung releases the Galaxy Note tablet which includes a stylus.
2012
Microsoft releases the Microsoft Surface Pro hybrid tablet/laptop with an optional Surface Pen.
2013
Lenovo introduces the ThinkPad Helix hybrid tablet/laptop which includes a Wacom stylus.
2015
Apple releases the Apple Pencil, a stylus for the iPad Pro with pressure sensitivity and angle detection.
See also
Stylus (computing)
Gesture recognition
Handwriting movement analysis
Handwriting recognition
Interactive whiteboard
Laser pointer (e.g. highlighting)
Graffiti (Palm OS)
Light pen
Sketch recognition
Tablet computer
References
External links
The Unknown History of Pen Computing contains a history of pen computing, including touch and gesture technology, from approximately 1917 to 1992.
Annotated bibliography of references to handwriting recognition and pen computing
A number of links to pen computing resources.
Digital Ink, Breakthrough Technology in Tablet PC, Brings the Power of the Pen to the Desktop
Notes on the History of Pen-based Computing (Youtube)
User interface techniques |
28051530 | https://en.wikipedia.org/wiki/Connascence | Connascence | Connascence is a software quality metric invented by Meilir Page-Jones to allow reasoning about the complexity caused by dependency relationships in object-oriented design, much like coupling did for structured design. In software engineering, two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. In addition to allowing categorization of dependency relationships, connascence also provides a system for comparing different types of dependency. Such comparisons between potential designs can often hint at ways to improve the quality of the software.
Strength
A form of connascence is considered to be stronger if it is more likely to require compensating changes in connascent elements. The stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship.
Degree
The acceptability of connascence is related to the degree of its occurrence. Connascence might be acceptable in limited degree but unacceptable in large degree. For example, a function or method that takes two arguments is generally considered acceptable. However, it is usually unacceptable for functions or methods to take ten arguments. Elements with a high degree of connascence incur greater difficulty, and cost, of change than elements that have a lower degree.
Locality
Locality matters when analyzing connascence. Stronger forms of connascence are acceptable if the elements involved are closely related. For example, many languages use positional arguments when calling functions or methods. This connascence of position is acceptable due to the closeness of caller and callee. Passing arguments to a web service positionally is unacceptable due to the relative unrelatedness of the parties. The same strength and degree of connascence will have a higher difficulty and cost of change, the more distant the involved elements are.
Types
This is a list of some types of connascence ordered approximately from weak to strong forms.
Static connascences
Connascences are said to be "static" if they can be found by visually examining the code.
Connascence of name (CoN)
Connascence of name is when multiple components must agree on the name of an entity. Method names are an example of this form of connascence: if the name of a method changes, callers of that method must be changed to use the new name.
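For illustration, a minimal Python sketch (the function name and data are hypothetical, not drawn from any particular codebase):

# Connascence of name: the call site and the definition must agree on the name;
# renaming the function forces a matching change in every caller.
def calculate_total(prices):
    return sum(prices)

total = calculate_total([2.5, 4.0])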
Connascence of type (CoT)
Connascence of type is when multiple components must agree on the type of an entity. In statically typed languages, the type of method arguments is an example of this form of connascence. If a method changes the type of its argument from an integer to a string, callers of that method must be changed to pass a different argument than before.
Connascence of meaning (CoM) or connascence of convention (CoC)
Connascence of meaning is when multiple components must agree on the meaning of particular values. Returning integers 0 and 1 to represent false and true, respectively, is an example of this form of connascence.
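A minimal Python sketch, assuming 0 and 1 are used as status codes (all names are illustrative):

# Connascence of meaning: both components rely on the unwritten convention
# that 0 means failure and 1 means success.
def save_record(record):
    return 1 if record else 0

if save_record({"id": 7}) == 1:
    print("saved")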
Connascence of position (CoP)
Connascence of position is when multiple components must agree on the order of values. Positional parameters in method calls are an example of this form of connascence. Both caller and callee must agree on the semantics of the first, second, etc. parameters.
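A minimal Python sketch (illustrative names):

# Connascence of position: caller and callee must agree on the order of arguments;
# swapping any two arguments silently produces a wrong result.
def create_user(name, email, age):
    return {"name": name, "email": email, "age": age}

user = create_user("Ada", "ada@example.com", 36)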
Connascence of algorithm (CoA)
Connascence of algorithm is when multiple components must agree on a particular algorithm. Message authentication codes are an example of this form of connascence. Both sides of the exchange must implement exactly the same hashing algorithm or the authentication will fail.
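A minimal Python sketch using HMAC-SHA256 from the standard library (the key and messages are placeholders):

import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # illustrative key, not a real secret

def sign(message: bytes) -> str:
    # Sender and receiver must implement exactly the same algorithm (HMAC-SHA256 here);
    # if one side switches to a different hash, verification fails.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

assert verify(b"hello", sign(b"hello"))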
Dynamic connascence
Connascences are said to be "dynamic" if they can only be discovered at runtime.
Connascence of execution (CoE)
Connascence of execution is when the order of execution of multiple components is important.
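A minimal Python sketch (the Connection class is hypothetical):

# Connascence of execution: connect() must run before send();
# the required ordering only reveals itself when the code executes.
class Connection:
    def __init__(self):
        self.is_open = False

    def connect(self):
        self.is_open = True

    def send(self, data):
        if not self.is_open:
            raise RuntimeError("connect() must be called before send()")
        return len(data)

conn = Connection()
conn.connect()
conn.send("payload")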
Connascence of timing (CoT)
Connascence of timing is when the timing of the execution of multiple components is important.
Connascence of values (CoV)
Connascence of values is when several values must change together.
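A minimal Python sketch (the order structure is hypothetical):

# Connascence of values: order["total"] must always equal the sum of the line items;
# updating one value without the other breaks an invariant that only shows up at runtime.
order = {"items": [10.0, 5.5], "total": 15.5}

order["items"].append(4.5)
order["total"] += 4.5   # forgetting this line leaves the two values inconsistent

assert order["total"] == sum(order["items"])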
Connascence of identity (CoI)
Connascence of identity is when multiple components must reference the same entity.
Reducing connascence
Reducing connascence will reduce the cost of change for a software system. One way of reducing connascence is by transforming strong forms of connascence into weaker forms. For example, a method that takes several arguments could be changed to use named parameters. This would change the connascence from CoP to CoN. Reducing the degree and increasing locality of involved elements constitute other ways to reduce connascence.
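A minimal Python sketch of this transformation, in which keyword-only parameters stand in for named parameters (all names are illustrative):

# Before: connascence of position -- every caller depends on the argument order.
def send_email_positional(to, cc, bcc, subject, body):
    return {"to": to, "cc": cc, "bcc": bcc, "subject": subject, "body": body}

# After: keyword-only parameters weaken this to connascence of name;
# callers now depend only on the parameter names, not their order.
def send_email(*, to, cc=None, bcc=None, subject="", body=""):
    return {"to": to, "cc": cc, "bcc": bcc, "subject": subject, "body": body}

message = send_email(subject="Hello", to="ada@example.com", body="Hi!")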
References
Grand Unified Theory of Software Design, Jim Weirich
Meilir Page-Jones, Comparing techniques by means of encapsulation and connascence, Communications of the ACM, Volume 35, Issue 9
What Every Programmer Should Know About Object-Oriented Design, Meilir Page-Jones, Dorset House Publishing
Fundamentals of Object-Oriented Design in UML, Meilir Page-Jones, Addison-Wesley Pub Co
Manuel Riverio, Aug 9, 2018, Connascence: A Look at Object-Oriented Design in Java
Software architecture |
1367386 | https://en.wikipedia.org/wiki/Dagstuhl | Dagstuhl | Dagstuhl is a computer science research center in Germany, located in and named after a district of the town of Wadern, Merzig-Wadern, Saarland.
Location
Following the model of the mathematical research center at Oberwolfach, the center is situated in a remote and tranquil location in the countryside.
The Leibniz Center is located in a historic country house, Schloss Dagstuhl (Dagstuhl Castle), together with modern purpose-built buildings connected by an enclosed footbridge.
The ruins of the 13th-century Dagstuhl Castle are nearby, a short walk up a hill from the Schloss.
History
The Leibniz-Zentrum für Informatik (LZI, Leibniz Center for Informatics) was established at Dagstuhl in 1990. In 1993, the over 200-year-old building received a modern extension with additional guest rooms, conference rooms, and a library. The center is managed as a non-profit organization and financed by national funds. It receives scientific support from a variety of German and foreign research institutions. Until April 2008, the center was named the International Conference and Research Center for Computer Science (German: Internationales Begegnungs- und Forschungszentrum für Informatik, IBFI). The center was founded by Reinhard Wilhelm, who served as its director until May 2014, when Raimund Seidel became director. The list of shareholders includes:
German Informatics Society
Saarland University
Technical University of Kaiserslautern
Karlsruhe Institute of Technology
Technische Universität Darmstadt
University of Stuttgart
University of Trier
Goethe University in Frankfurt
Centrum Wiskunde & Informatica, Netherlands
Institute for Research in Computer Science and Automation, France
Max Planck Society
In 2012, another new building with seven guest rooms was opened. Since 1 January 2005, the LZI has been a member of the Leibniz Association.
Library
Dagstuhl's computer science library has over 50,000 books and other media, among them a full set of Springer-Verlag's Lecture Notes in Computer Science (LNCS) series and electronic access to many computer science journals.
Seminar series
Dagstuhl supports computer science by organizing high-profile seminars on current topics in informatics. Dagstuhl Seminars, which are established after review and approval by the Scientific Directorate, bring together personally invited scientists from academia and industry from all over the world to discuss their newest ideas and problems. Apart from the Dagstuhl Seminars, the center also hosts summer schools, group retreats, and other scientific events, all devoted to informatics. Every year about 3,500 scientists stay in Dagstuhl for about 100 seminars, workshops, and other scientific events. The number of participants is limited both to encourage discussion and by the available housing capacity. The stay is full-board; participants are accommodated in the original house or in the modern annex, and have all their meals at the center. Seminars are usually held for a week: participants arrive on Sunday evening and depart on Friday evening or Saturday morning. One or sometimes two seminars are held simultaneously, alongside other small meetings.
The cryptographic technique DP5 (Dagstuhl Privacy Preserving Presence Protocol P) is named after Schloss Dagstuhl.
Publications
As well as publishing proceedings from its own seminars, the Leibniz Center publishes the Leibniz International Proceedings in Informatics (LIPIcs), a series of open access conference proceedings from computer science conferences worldwide. Conferences published in this series include the Symposium on Theoretical Aspects of Computer Science (STACS), held annually in Germany and France, the conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), held annually in south Asia, the Computational Complexity Conference (CCC), held at a different international venue each year, the Symposium on Computational Geometry (SoCG), the International Colloquium on Automata, Languages and Programming (ICALP), the International Symposium on Mathematical Foundations of Computer Science (MFCS) and the International Conference on Concurrency Theory (CONCUR).
See also
Leibniz Association
FIZ Karlsruhe
Heidelberg Institute for Theoretical Studies
DBLP
References
External links
Official website
Schloss Dagstuhl on LinkedIn
Schloss Dagstuhl on Twitter
Non-profit organisations based in Germany
1990 establishments in Germany
Leibniz Association
Computer science institutes in Germany
Castles in Saarland
Buildings and structures in Merzig-Wadern |
2014805 | https://en.wikipedia.org/wiki/Organisation%20of%20the%20Government%20of%20Singapore | Organisation of the Government of Singapore | The Government of Singapore consists of several departments, known in Singapore as ministries and statutory boards. Ministries are led by a member of the cabinet and deal with state matters that require direct political oversight. The member of the cabinet heading a ministry is known as the minister, who is supported by a junior minister known as a minister of state. The administrative management of each ministry is led by a senior civil servant known as a permanent secretary.
Ministry of Culture, Community and Youth (MCCY)
Committees/Councils
Hindu Advisory Board
Hindu Endowments Board
National Integration Council
Sikh Advisory Board
Departments/Divisions
Arts and Heritage Division
Charities Unit
Community Relations and Engagement Division
Corporate Communications Division
Development and Corporate Administration Division
Human Resource and Organisation Development
Information Technology Division
Internal Audit Division
Legal Unit
National Youth Council
Registry of Co-operative Societies and Mutual Benefit Organisations
Resilience and Engagement Division
Partnerships Project Office
Sports Division
Strategic Planning and Finance Division
Youth Division
Statutory boards
Majlis Ugama Islam Singapura
National Arts Council
National Heritage Board
People's Association
Sport Singapore
Ministry of Defence (MINDEF)
Departments/Divisions
Centre for Strategic Infocomm Technologies
Defence Management Group
Defence Policy Group
Defence Technology Collaboration Office
Defence Cyber Organisation
Future Systems and Technology Directorate
Industry & Resources Policy Office
Internal Audit Department
MINDEF Tele-Services
MINDEF/SAF Manpower Centres
SAF Formations
Republic of Singapore Air Force (RSAF)
Republic of Singapore Navy (RSN)
SAF Military Police Command
Singapore Army
Army Senior Specialist Staff Officers
Centre of Excellence for Soldier Performance
SAFSA
Singapore Armed Forces
The Joint Staff
Foreign Military Liaison Branch
Headquarters Medical Corps
Air Force Headquarters
Navy Headquarters
Singapore Army Headquarters
SAFTI Military Institute Headquarters
Singapore Maritime Crisis Centre
Safety and Systems Review Directorate
Technology Strategy and Policy Office
Military Security Department
Training Schools
Basic Military Training Centre (BMTC)
Officer Cadet School (OCS)
Specialist Cadet School (SCS)
Specialist and Warrant Officer Advanced School
Statutory board
Defence Science and Technology Agency (DSTA)
Ministry of Education (MOE)
Departments/Divisions
Communications and Engagement Group
HR Group
Academy of Singapore Teachers
Curriculum Planning & Development Division 1
Curriculum Planning & Development Division 2
Educational Technology Division
Finance and Procurement Division
Higher Education Group
Infrastructure and Facility Services
Information Technology Division
Planning Division
Research and Management Information Division
Schools Division
Special Education Needs Division
Student Placement and Services Division
Student Development Curriculum Division
Curriculum Policy Office
English Language Institute of Singapore
Internal Audit
Legal Services
Physical Education and Sports Teacher Academy
Singapore Teachers Academy for the Arts
Universities
National University of Singapore
Nanyang Technological University
Singapore Institute of Technology
Singapore Management University
Singapore University of Social Sciences
Singapore University of Technology and Design
Statutory boards
Institute of Technical Education
ISEAS–Yusof Ishak Institute
Nanyang Polytechnic
Ngee Ann Polytechnic
Republic Polytechnic
Science Centre Singapore
Singapore Examinations and Assessment Board
Singapore Polytechnic
SkillsFuture Singapore
Temasek Polytechnic
Ministry of Finance (MOF)
Departments/Divisions
Accountant-General's Department
Corporate Development
Corporate Services Directorate
Economic Programmes Directorate
Fiscal Policy Directorate
Free Trade Agreement
Goods and Services Tax Board of Review
Governance and Investment Directorate
Income Tax Board of Review
Internal Audit Unit
Managing for Excellence Directorate
Singapore Customs
Social and Security Programmes
Street and Building Names Board
Valuation Review Board
Vital.org
Statutory boards
Accounting and Corporate Regulatory Authority
Inland Revenue Authority of Singapore (IRAS)
Singapore Accountancy Commission (SAC)
Singapore Totalisator Board (Tote Board)
Ministry of Foreign Affairs (MFA)
Directorates in Headquarters
Americas Directorate
ASEAN Directorate
Australia, New Zealand and the Pacific Directorate
Consular Directorate
Corporate Affairs Directorate
Europe Directorate
Human Resource Directorate
MFA Diplomatic Academy
Information Management Directorate
Internal Audit Unit
International Economics Directorate
International Organisations Directorate
Middle East, North Africa and Central Asia Directorate
Northeast Asia Directorate
Protocol Directorate
South Asia and Sub-Saharan Africa Directorate
Southeast Asia Directorate
Technical Cooperation Directorate
Overseas missions
Ministry of Health (MOH)
Committees/Councils
Dental Specialist Accreditation Board
Family Physicians Accreditation Board
Optometrists and Opticians Board
Pharmacy Specialist Accreditation Board
Specialist Accreditation Board
Allied Health Professions Council
Departments/Divisions
Agency for Integrated Care
Alexandra Hospital
Changi General Hospital
Institute of Mental Health
KK Women's and Children's Hospital
Khoo Teck Puat Hospital
MOH Office for Healthcare Transformation
National Cancer Centre Singapore
National Centre for Infectious Diseases
National Dental Centre Singapore
National Healthcare Group
National Healthcare Group Polyclinics
National Heart Centre Singapore
National Neuroscience Institute
National Skin Centre
National University Health System
National University Hospital
National University Polyclinics
Ng Teng Fong General Hospital
Singapore Gamma Knife Centre
Singapore General Hospital
Singapore Health Services
Singapore National Eye Centre
SingHealth Polyclinics
Tan Tock Seng Hospital
Woodlands Health Campus
Statutory boards
Health Promotion Board
Health Sciences Authority
Singapore Dental Council
Singapore Medical Council
Singapore Nursing Board
Singapore Pharmacy Council
TCM Practitioners Board
Ministry of Home Affairs (MHA)
Councils
National Council Against Drug Abuse
National Crime Prevention Council
National Fire Prevention and Civil Emergency Preparedness Council
Presidential Council for Religious Harmony
Departments
Central Narcotics Bureau
Home Team Academy
Immigration and Checkpoints Authority
Internal Security Department
Singapore Civil Defence Force
Singapore Police Force
Singapore Prison Service
Divisions
Community Partnership & Communications Group
Finance & Admin Division
Gambling Regulatory Unit
Human Resource Division
Home Team Medical Services Division
International Cooperation and Partnerships Division
Legal Division
Joint Operations Group
Science & Technology Group
Planning & Organisation Division
Policy Development Division
Technology and Logistics Division
Registry of Societies
Research & Statistics Division
Risk Management and Audit Group
Training and Competency Development Division
Office of Chief Psychologist
Statutory boards
Casino Regulatory Authority of Singapore (CRA)
Home Team Science and Technology Agency (HTX)
Singapore Corporation of Rehabilitative Enterprises (SCORE)
Ministry of Communications and Information (MCI)
Departments/Divisions
Audit Unit
Corporate Communications Division
Corporate Development Division
Cyber Security Agency (under PMO)
Digital Readiness & Learning Division
Economic Regulation Division
Group Information Technology Division
Industry Division
Information Operations Centre
Information Planning Office
Information Policy Division
Legal Services
Media Division
Public Communications Division
REACH
Research & Data Division
Security & Resilience Division
Senior Consultants
Strategic Planning Division
Transformation
Statutory boards
Infocomm Media Development Authority
National Library Board
National Archives of Singapore
Personal Data Protection Commission
Ministry of Law (MinLaw)
Committees/Councils
Singapore Academy of Law (SAL)
Departments/Divisions
Appeals Board (Land Acquisition)
Community Legal Services Group
Copyright Tribunals
Corporate Services Divisions
International & Advisory
Legal Policy
Legal Services Regulatory Authority
Policy Divisions
Statutory boards
Intellectual Property Office of Singapore (IPOS)
Singapore Land Authority (SLA)
Land Surveyors Board (LSB)
Ministry of Manpower (MOM)
Departments/Divisions
Corporate Services Group
Foreign Manpower Management Division
Income Security Policy Department
International Manpower Division
Labour Relations and Welfare Division
Manpower Planning and Policy Division
Occupational Safety and Health Division
Organisation Management Department
Work Pass Division
Statutory boards
Central Provident Fund Board
Singapore Labour Foundation
Workforce Singapore
Ministry of National Development (MND)
Committees
Community Improvement Projects Committee
Community Improvement Projects Executive Committee
Councils
Aljunied - Hougang Town Council
Ang Mo Kio Town Council
Bishan - Toa Payoh Town Council
Chua Chu Kang Town Council
East Coast Town Council
Holland - Bukit Panjang Town Council
Jalan Besar Town Council
Jurong - Clementi Town Council
Marine Parade Town Council
Marsiling - Yew Tee Town Council
Nee Soon Town Council
Pasir Ris - Punggol Town Council
Sembawang Town Council
Sengkang Town Council
Tampines Town Council
Tanjong Pagar Town Council
West Coast Town Council
Departments/Divisions
Corporate Development Division
Housing Division
Infrastructure Division
Planning and Research Unit
Strategic Planning Division
Statutory boards
Board of Architects
Building and Construction Authority (BCA)
Council for Estate Agencies (CEA)
Housing and Development Board (HDB)
National Parks Board (NPB)
Professional Engineers Board, Singapore
Strata Titles Boards
Urban Redevelopment Authority (URA)
Ministry of Social and Family Development (MSF)
Departments
Early Childhood Development Agency
Emergency Preparedness Unit
Feedback Unit
Organisational Development Unit
Divisions
Communications and International Relations Division
Community and Social Sector Development Division
Elderly Development Division
Family Development Division
Finance And Facilities Division
Human Resource Division
Information Technology Division
Rehabilitation & Protection Division
Social Support Division
Sports Division
Strategic Policy And Research Division
Youth Division
Statutory boards
National Council of Social Service
Ministry of Sustainability and the Environment (MSE)
Departments/Divisions
Energy & Climate Policy
Environmental Policy
Water & Food Policy
International Policy
Communications & 3P Partnerships Division
Futures & Planning
Corporate Development
Climate Change Negotiation Office
Statutory boards
National Environment Agency (NEA)
Public Utilities Board (PUB)
Singapore Food Agency (SFA)
Ministry of Trade and Industry (MTI)
Departments/Divisions
Capability Development Group
Corporate Development Division
Department of Statistics
Directorate A, Trade Division
Directorate B, Trade Division
Economics Division
Enterprise Division
Industry Division
International Business Development Division
Resource Centre
Resource Division
Service Improvement Unit
Special Project Unit
Statutory boards
Agency for Science, Technology and Research (A*STAR)
Competition and Consumer Commission of Singapore (CCCS)
Economic Development Board (EDB)
DesignSingapore Council
Energy Market Authority (EMA)
Enterprise Singapore (ESG)
Hotels Licensing Board (HLB)
JTC Corporation (JTC)
Sentosa Development Corporation (SDC)
Singapore Tourism Board (STB)
Ministry of Transport (MOT)
Departments/Divisions
Air Transport Division
Corporate Communications Division
Corporate Development Division
Futures and Transformation Division
International Relations and Security Division
Land Transport Division
Sea Transport Division
Transport Safety Investigation Bureau
Technology Office
Statutory boards
Civil Aviation Authority of Singapore (CAAS)
Land Transport Authority (LTA)
Maritime and Port Authority of Singapore (MPA)
Public Transport Council (PTC)
Prime Minister's Office (PMO)
Committees/Councils
Singapore Bicentennial Office
Departments/Divisions
Communications Group
Corrupt Practices Investigation Bureau
Cyber Security Agency (Managed by MCI)
Elections Department
Horticultural Section
Istana Maintenance Unit
Istana Security Unit
Justices of the Peace, Singapore
National Research Foundation
National Security Coordination Secretariat
Public Service Division
Smart Nation and Digital Government Office
Strategy Group
National Climate Change Secretariat (NCCS)
National Population and Talent Division (NPTD)
Public Sector Science and Technology Policy and Plans Office
Statutory Boards
Civil Service College Singapore
Monetary Authority of Singapore
Government Technology Agency
Organs of State
Attorney-General's Chambers (AGC)
Auditor-General's Office (AGO)
The Cabinet (CAB)
Istana (ISTANA)
Judiciary, Industrial Arbitration Court (IAC)
Judiciary, Family Justice Courts (FJCOURTS)
Judiciary, State Courts (STATE COURTS)
Judiciary, Supreme Court (SUPCOURT)
Parliament of Singapore (PH)
Public Service Commission (PSC)
See also
Statutory boards of the Singapore Government
References
External links
Singapore Government Website
Singapore Government Directory
Singapore Whitepages Government Numbers
Government of Singapore
Lists of government agencies |
149353 | https://en.wikipedia.org/wiki/Computational%20biology | Computational biology | Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modelling and computational simulation techniques to the study of biological, ecological, behavioural, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science, ecology, and evolution.
Computational biology is different from biological computing, which is a subfield of computer engineering using bioengineering and biology to build computers.
Introduction
Computational biology, which includes many aspects of bioinformatics and much more, is the science of using biological data to develop algorithms or models in order to understand biological systems and relationships.
Until recently, biologists did not have access to very large amounts of data. This data has now become commonplace, particularly in molecular biology and genomics. Researchers were able to develop analytical methods for interpreting biological information, but were unable to share them quickly among colleagues.
Bioinformatics began to develop in the early 1970s. It was considered the science of analyzing informatics processes of various biological systems. At this time, research in artificial intelligence was using network models of the human brain in order to generate new algorithms. This use of biological data to develop other fields pushed biological researchers to revisit the idea of using computers to evaluate and compare large data sets. By 1982, information was being shared among researchers through the use of punch cards. The amount of data being shared began to grow exponentially by the end of the 1980s. This required the development of new computational methods in order to quickly analyze and interpret relevant information.
Since the late 1990s, computational biology has become an important part of developing emerging technologies for the field of biology.
The terms computational biology and evolutionary computation have similar names, but are not to be confused. Unlike computational biology, evolutionary computation is not concerned with modeling and analyzing biological data. It instead creates algorithms based on the ideas of evolution across species. Sometimes referred to as genetic algorithms, research in this field can be applied to computational biology. While evolutionary computation is not inherently a part of computational biology, computational evolutionary biology is a subfield of it.
Computational biology has been used to help sequence the human genome, create accurate models of the human brain, and assist in modeling biological systems.
Subfields
Computational anatomy
Computational anatomy is a discipline focusing on the study of anatomical shape and form at the visible or gross anatomical scale of morphology. It involves the development and application of computational, mathematical and data-analytical methods for modeling and simulation of biological structures. It focuses on the anatomical structures being imaged, rather than the medical imaging devices. Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging (MRI), computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morphome scale in 3D.
The original formulation of computational anatomy is as a generative model of shape and form from exemplars acted upon via transformations. The diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow from one anatomical configuration into another. It relates to shape statistics and morphometrics, with the distinction that diffeomorphisms are used to map coordinate systems, whose study is known as diffeomorphometry.
Computational biomodeling
Computational biomodeling is a field concerned with building computer models of biological systems. Computational biomodeling aims to develop and use visual simulations in order to assess the complexity of biological systems. This is accomplished through the use of specialized algorithms and visualization software. These models allow for prediction of how systems will react under different environments. This is useful for determining if a system is robust. A robust biological system is one that can “maintain [its] state and functions against external and internal perturbations”, which is essential for a biological system to survive. Computational biomodeling generates a large archive of such data, allowing for analysis by multiple users. While current techniques focus on small biological systems, researchers are working on approaches that will allow larger networks to be analyzed and modeled. A majority of researchers believe that this will be essential in developing modern medical approaches to creating new drugs and gene therapy.
A useful modelling approach is to use Petri nets via tools such as esyN.
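For example, a minimal Python sketch of a Petri net's token-firing rule, independent of any particular tool such as esyN (the places and transition are illustrative):

# A tiny Petri net: a transition fires only when every input place holds enough tokens,
# consuming tokens from its inputs and producing tokens in its outputs.
marking = {"substrate": 2, "enzyme": 1, "product": 0}
transition = {"inputs": {"substrate": 1, "enzyme": 1},
              "outputs": {"product": 1, "enzyme": 1}}

def fire(marking, t):
    if all(marking[p] >= n for p, n in t["inputs"].items()):
        for p, n in t["inputs"].items():
            marking[p] -= n
        for p, n in t["outputs"].items():
            marking[p] += n
    return marking

print(fire(marking, transition))   # {'substrate': 1, 'enzyme': 1, 'product': 1}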
Computational ecology
Computational methods in ecology have seen increasing interest. Until recent decades, theoretical ecology has largely dealt with analytic models that were largely detached from the statistical models used by empirical ecologists. However, computational methods have aided in developing ecological theory via simulation of ecological systems, in addition to increasing application of methods from computational statistics in ecological analyses.
Computational evolutionary biology
Computational biology has assisted the field of evolutionary biology in many capacities. This includes:
Using DNA data to reconstruct the tree of life with computational phylogenetics
Fitting population genetics models (either forward time or backward time) to DNA data to make inferences about demographic or selective history
Building population genetics models of evolutionary systems from first principles in order to predict what is likely to evolve
Computational genomics
Computational genomics is a field within genomics that studies the genomes of cells and organisms. It is sometimes referred to as Computational and Statistical Genetics and encompasses much of Bioinformatics. The Human Genome Project is one example of computational genomics. The project sought to sequence the entire human genome into a single data set. Once fully implemented, this could allow doctors to analyze the genome of an individual patient. This opens the possibility of personalized medicine, in which treatments are prescribed based on an individual's pre-existing genetic patterns. The project has inspired many similar programs. Researchers are looking to sequence the genomes of animals, plants, bacteria, and all other types of life.
One of the main ways that genomes are compared is by sequence homology. Homology is the study of biological structures and nucleotide sequences in different organisms that come from a common ancestor. Research suggests that between 80 and 90% of genes in newly sequenced prokaryotic genomes can be identified this way.
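As a simplified illustration, a short Python sketch of the kind of sequence comparison involved; real homology searches rely on statistically scored alignments (for example, with BLAST), and the sequences here are made up:

# Percent identity between two aligned sequences of equal length -- a crude stand-in
# for the similarity scores used when inferring homology.
def percent_identity(seq_a, seq_b):
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / max(len(seq_a), len(seq_b))

print(percent_identity("ATGGCGTAC", "ATGGCTTAC"))   # ~88.9: one mismatch out of nine positions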
This field is still in development. An untouched project in the development of computational genomics is the analysis of intergenic regions. Studies show that roughly 97% of the human genome consists of these regions. Researchers in computational genomics are working on understanding the functions of non-coding regions of the human genome through the development of computational and statistical methods and via large consortia projects such as ENCODE (The Encyclopedia of DNA Elements) and the Roadmap Epigenomics Project.
Computational neuropsychiatry
Computational neuropsychiatry is the emerging field that uses mathematical and computer-assisted modeling of brain mechanisms involved in mental disorders. Several initiatives have already demonstrated that computational modeling makes an important contribution to understanding the neuronal circuits that could generate mental functions and dysfunctions.
Computational neuroscience
Computational neuroscience is the study of brain function in terms of the information processing properties of the structures that make up the nervous system. It is a subset of the field of neuroscience, and looks to analyze brain data to create practical applications. It looks to model the brain in order to examine specific aspects of the neurological system. Various types of models of the brain include:
Realistic Brain Models: These models look to represent every aspect of the brain, including as much detail at the cellular level as possible. Realistic models provide the most information about the brain, but also have the largest margin for error. More variables in a brain model create the possibility for more error to occur. These models do not account for parts of the cellular structure that scientists do not know about. Realistic brain models are the most computationally heavy and the most expensive to implement.
Simplifying Brain Models: These models look to limit the scope of a model in order to assess a specific physical property of the neurological system. This allows for the intensive computational problems to be solved, and reduces the amount of potential error from a realistic brain model.
It is the work of computational neuroscientists to improve the algorithms and data structures currently used to increase the speed of such calculations.
Computational oncology
Computational oncology, sometimes also called cancer computational biology, is a field that aims to determine the future mutations in cancer through an algorithmic approach to analyzing data. Research in this field has led to the use of high-throughput measurement. High throughput measurement allows for the gathering of millions of data points using robotics and other sensing devices. This data is collected from DNA, RNA, and other biological structures. Areas of focus include determining the characteristics of tumors, analyzing molecules that are deterministic in causing cancer, and understanding how the human genome relates to the causation of tumors and cancer.
Computational pharmacology
Computational pharmacology (from a computational biology perspective) is “the study of the effects of genomic data to find links between specific genotypes and diseases and then screening drug data”. The pharmaceutical industry requires a shift in methods to analyze drug data. Pharmacologists were able to use Microsoft Excel to compare chemical and genomic data related to the effectiveness of drugs. However, the industry has reached what is referred to as the Excel barricade. This arises from the limited number of cells accessible on a spreadsheet. This development led to the need for computational pharmacology. Scientists and researchers develop computational methods to analyze these massive data sets. This allows for an efficient comparison between the notable data points and allows for more accurate drugs to be developed.
Analysts project that if major medications fail due to patents, computational biology will be necessary to replace current drugs on the market. Doctoral students in computational biology are being encouraged to pursue careers in industry rather than take postdoctoral positions. This is a direct result of major pharmaceutical companies needing more qualified analysts of the large data sets required for producing new drugs.
Software and tools
Computational biologists use a wide range of software, from command-line tools to graphical and web-based applications.
Open source software
Open source software provides a platform to develop computational biological methods. Specifically, open source means that every person and/or entity can access and benefit from software developed in research. PLOS cites four main reasons for the use of open source software including:
Reproducibility: This allows for researchers to use the exact methods used to calculate the relations between biological data.
Faster Development: developers and researchers do not have to reinvent existing code for minor tasks. Instead they can use pre-existing programs to save time on the development and implementation of larger projects.
Increased quality: Having input from multiple researchers studying the same topic provides a layer of assurance that the code is free of errors.
Long-term availability: Open source programs are not tied to any businesses or patents. This allows for them to be posted to multiple web pages and ensure that they are available in the future.
Conferences
There are several large conferences that are concerned with computational biology. Some notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB) and Research in Computational Molecular Biology (RECOMB).
Journals
There are numerous journals dedicated to computational biology. Some notable examples include Journal of Computational Biology and PLOS Computational Biology. The PLOS computational biology journal is a peer-reviewed open access journal that has many notable research projects in the field of computational biology. They provide reviews on software, tutorials for open source software, and display information on upcoming computational biology conferences.
Related fields
Computational biology, bioinformatics and mathematical biology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science. The NIH describes computational/mathematical biology as the use of computational/mathematical approaches to address theoretical and experimental questions in biology and, by contrast, bioinformatics as the application of information science to understand complex life-sciences data.
While each field is distinct, there may be significant overlap at their interface.
See also
References
External links
bioinformatics.org
Bioinformatics
Computational fields of study |
62683 | https://en.wikipedia.org/wiki/Louisiana%20Tech%20University | Louisiana Tech University | Louisiana Tech University (Louisiana Tech, La. Tech, or simply Tech) is a public research university in Ruston, Louisiana. It is part of the University of Louisiana System and classified among "R2: Doctoral Universities – High research activity".
Louisiana Tech opened as the Industrial Institute and College of Louisiana in 1894 during the Second Industrial Revolution. The original mission of the college was for the education of students in the arts and sciences for the purpose of developing an industrial economy in post-Reconstruction Louisiana. Four years later in 1898, the state constitution changed the school's name to Louisiana Industrial Institute. In 1921, the college changed its name to Louisiana Polytechnic Institute to reflect its development as a larger institute of technology. Louisiana Polytechnic Institute became desegregated in the 1960s. It officially changed its name to Louisiana Tech University in 1970 as it satisfied criteria of a research university.
Louisiana Tech enrolled 12,463 students in five academic colleges during the Fall 2018 academic quarter including 1,282 students in the graduate school. In addition to the main campus in Ruston, Louisiana Tech holds classes at the Louisiana Tech University Shreveport Center, Academic Success Center in Bossier City, Barksdale Air Force Base Instructional Site, and on the CenturyLink campus in Monroe.
Louisiana Tech fields 16 varsity NCAA Division I sports teams (7 men's, 9 women's teams) and is a member of Conference USA of the Football Bowl Subdivision. The university is known for its Bulldogs football team and Lady Techsters women's basketball program which won three national championship titles (1981, 1982, 1988) and made 13 Final Four appearances in the program's history.
History
Early years
Ruston College, a forerunner to Louisiana Tech, was established in the mid-1880s by W. C. Friley, a Southern Baptist pastor. This institution lasted for seven years and had annual enrollments of about 250 students. Friley subsequently served as the first president of Hardin–Simmons University in Abilene, Texas, from 1892 to 1894, and as the second president of Louisiana College in Pineville from 1909 to 1910.
On May 14, 1894, the Lincoln Parish Police Jury held a special session to outline plans to secure a regional industrial school. The police jury (a body similar to a county court or county commission in other states) called upon State Representative George M. Lomax to introduce the proposed legislation during the upcoming session. Representative Lomax, Jackson Parish Representative J. T. M. Hancock, and journalist, lawyer, and future judge John B. Holstead fought for the passage of the bill. On July 6, 1894, the proposed bill was approved as Act No. 68 of the General Assembly of Louisiana. The act established "The Industrial Institute and College of Louisiana", an industrial institute created for the education of white children in the arts and sciences.
In 1894, Colonel Arthur T. Prescott was elected as the first president of the college. He moved to Ruston and began overseeing the construction of a two-story main building. The brick building housed eight large classrooms, an auditorium, a chemical laboratory, and two offices. A frame building was also built nearby and was used for the instruction of mechanics. The main building was located on a plot of land that was donated to the school by Francis P. Stubbs. On September 23, 1895, the school started its first session with six faculty members and 202 students.
In May 1897, Harry Howard became the first graduate. Colonel Prescott awarded him with a Bachelor of Industry degree, but there was no formal commencement. The first formal commencement was held in the Ruston Opera House the following May with ten graduates receiving their diplomas.
Article 256 of the 1898 state constitution changed the school's name to Louisiana Industrial Institute. Two years later, the course of study was reorganized into two years of preparatory work and three years of college level courses. Students who were high school graduates were admitted to the seventh quarter (college level) of study without examination. As years went by, courses changed and admissions requirements tightened. From 1917 to 1925, several curricula were organized according to the junior college standards and were offered leading to the Bachelor of Industry degree. In 1919, the Board of Trustees enlarged the curricula and started granting a standard baccalaureate degree. The first of these was granted on June 15, 1921, a Bachelor of Science in Engineering.
The Constitution adopted June 18, 1921, changed the name of the school in Article XII, Section 9, from Louisiana Industrial Institute to Louisiana Polytechnic Institute, or "Louisiana Tech" for short.
Expansion
The Main Building, also known as Old Main, burned to the ground in 1936, but the columns that marked the entrance remain in place behind Prescott Memorial Library. By June 1936, construction on a new administration building had begun. On completion in January 1937, it was named Leche Hall in honor of then Governor Richard W. Leche of New Orleans. The building was renamed after the death of former university president, J.E. Keeny, and remains the remodeled Keeny Hall.
Louisiana Polytechnic Institute experienced an infrastructure growth spurt in 1939 and 1940. Seven buildings were designed by architect Edward F. Neild and completed at a cost of $2,054,270. These were Aswell Hall (girls' dormitory), Robinson Hall (men's dormitory for juniors and seniors), Tolliver Hall (880-seat dining hall), Bogard Hall (the Engineering Building), the S.J. Wages Power Plant, Reese Agricultural Hall (located on the South Campus Tech Farm), and the Howard Auditorium & Fine Arts Building.
During World War II, Louisiana Polytechnic Institute was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission.
In 1959, four students were awarded the first master's degrees by the institution.
University era
In 1962, Foster Jay Taylor became the 12th President of the Louisiana Polytechnic Institute, having succeeded Ralph L. Ropp. During his twenty-five years as president, Dr. Taylor oversaw the transformation of the former Louisiana Polytechnic Institute into Louisiana Tech University. The university's enrollment grew from about 3,000 students in 1962 to roughly 12,000 students in 1987. The first African-American students at Louisiana Tech, James Earl Potts (a transfer student from the nearby HBCU Grambling State University) and Bertha Bradford-Robinson, were admitted in the spring of 1965.
Most of the modern buildings on the Main Campus were either built or renovated during Taylor's tenure as university president. The main athletic facilities were constructed during the Taylor Era including Joe Aillet Stadium, the Thomas Assembly Center, J.C. Love Field, and the Lady Techster Softball Complex. In addition to the athletic facilities, the 16-story Wyly Tower, Student Bookstore, Nethken Hall (Electrical Engineering building), the University President's House, and the current College of Business Building were built on the Main Campus. In order to house the increasing student body of Louisiana Tech, Dr. Taylor led the construction of Graham, Harper, Kidd, Caruthers, and Neilson residence halls.
Taylor's time as Louisiana Tech president also marked the beginning of Lady Techster athletics. In 1974, Taylor established the Lady Techsters women's basketball program with a $5,000 appropriation. He hired Sonja Hogg, a 28-year-old PE instructor at Ruston High School, as the Lady Techsters' first head coach. Under Coach Hogg and her successor Leon Barmore, the Lady Techsters won three National Championships during the 1980s. In 1980, Dr. Taylor founded the Lady Techster Softball team with Barry Canterbury serving as the team's first head coach. The team made seven straight trips to the NCAA Softball Tournament and three trips to the Women's College World Series during the 1980s.
The first doctorate was awarded in 1971, a Ph.D. in chemical engineering.
In 1992, Louisiana Tech became a "selective admissions" university. The university has raised its admissions criteria four times since 2000 by increasing the minimum overall grade point average, composite ACT score, and class ranking.
Louisiana Tech has earned recognition from the Louisiana Board of Regents for its graduation rate and retention rate. According to a report of the Louisiana Board of Regents published in December 2011, Louisiana Tech has the second-highest graduation rate among the fourteen public universities in the state of Louisiana. The 53.3% 6-year graduation rate is the highest in the University of Louisiana System. Louisiana Tech has a 78.64% retention rate among incoming freshmen who stay with the same school after the first year, the top rate in the University of Louisiana System. The average time-to-degree ratio for Tech's graduates is 4.7 years, the fastest in the UL System.
Louisiana Tech became the first in the world to confer a Bachelor of Science degree in nanosystems engineering when Josh Brown earned his degree in May 2007. Continuing its mission as an engineering pioneer, Louisiana Tech also launched the nation's first cyber engineering BS degree in 2012.
To date, Louisiana Tech has awarded more than 100,900 degrees.
Campus
The campus of Louisiana Tech University is located in Ruston, Louisiana. The major roads that border or intersect the Tech campus are Tech Drive, California Avenue, Alabama Avenue, and Railroad Avenue. Interstate 20 and U.S. Highways 80 and 167 are located within one mile (1.6 km) of the Main Campus. In addition, a set of railroad tracks operated by Kansas City Southern Railway bisects the campus near Railroad Avenue.
The portion of the Main Campus located west of Tech Drive and north of the railroad includes all of the university's major athletic facilities except for J.C. Love Field. The land east of Tech Drive and north of the railroad includes the Lambright Intramural Center, J.C. Love Field, and the University Park Apartments. Most of the older residence halls are located near California Avenue and along Tech Drive south of the railroad tracks. The older part of the Main Campus is located south of Railroad Avenue. The Enterprise Campus is located on a plot of land east of Homer Street and bordering the oldest part of the Main Campus.
In addition to the Main Campus, Louisiana Tech also has land on the South Campus, farm land west of the Main Campus, forest land in Winn, Natchitoches, and Union Parishes, land in Shreveport, a golf course in Lincoln Parish, an arboretum site west of the Main Campus, and a Flight Operations Center at Ruston Regional Airport.
Main campus
The Main Campus at Louisiana Tech University originated in 1894 as a plot of land with only two buildings, the Old Main Building and a nearby frame building used by the Department of Mechanics (the forerunner of the College of Engineering and Science). Today, the Main Campus contains 86 buildings, including 22 apartment buildings for the University Park Apartments on the north part of the campus. Many of the buildings, especially the older buildings, on the Main Campus are built in the Colonial Revival style. Bogard Hall, Howard Auditorium, Keeny Hall, University Hall (formerly the original Prescott Library), Reese Hall, Robinson Hall, and Tolliver Hall are all included on the National Register of Historic Places.
The oldest existing building on Louisiana Tech's campus is the Ropp Center. The Italian-style, wood-frame house was constructed in 1911 and is named after Ralph L. Ropp, Louisiana Tech's President from 1949 to 1962. The Ropp Center served as the home of seven Louisiana Tech Presidents until a new president's house was built in 1972 on the west side of Tech's campus. The Ropp Center was used by the College of Home Economics for thirteen years until the Office of Special Programs moved into the building in 1985. In 2002, a $1 million renovation was completed to transform the Ropp Center into a faculty and staff club that is used for special events and housing for on-campus guests.
The Quadrangle (the Quad) is the focal point of the oldest part of the Main Campus. The Quad is considered to be one of the most peaceful and beautiful locations at Louisiana Tech. Large oak trees and park benches all around the Quad provide students and visitors a quiet place to study and relax. At the center of the Quad is The Lady of the Mist sculpture and fountain, a landmark for students and alumni alike. The buildings surrounding the Quad are Keeny Hall, Howard Auditorium, the Student Center, the Bookstore, the Wyly Tower of Learning, the current Prescott Memorial Library, and the original Prescott Library now known as University Hall.
Another popular location on the Main Campus is Centennial Plaza. In 1994, Centennial Plaza was constructed to commemorate the 100th anniversary of Louisiana Tech's founding. The plaza was funded by a student self-assessed fee and designed specifically for the use and enjoyment of the student body. Centennial Plaza is used for special events throughout the year, such as Christmas in the Plaza, movie events, and student organizational fairs. Centennial Plaza is one of the main gathering points of the students due to the plaza's close proximity to the on-campus restaurants, coffee shops, dining halls, university post office, and offices for Student Life, SGA, and Union Board. At the center of the plaza is the Clock Tower which has the sound and digital capabilities to play the Alma Mater, Fight Song, and any other songs and calls as needed. The Alumni Brick Walkway runs through Centennial Plaza and around the Clock Tower. A large Louisiana Tech seal marks the middle of Centennial Plaza just west of the Clock Tower. Centennial Plaza is enclosed by Tolliver Hall, the Student Center, Howard Auditorium, and Harper Residence Hall.
Louisiana Tech has two main dining halls on Wisteria Drive on the west end of Centennial Plaza. The first dining hall is the Student Center which is home to the cafeteria, a smaller dining hall for eating and socializing, the La Tech Cafe, several small restaurants including Chick-fil-A, and the Tonk. The Student Center is also home to the CEnIT Innovation Lab, several large study areas, and a conference room. One of the three bronze bulldog statues is located on the first floor of the Student Center near the entrance of the Tonk. Students pet the bulldog statue for good luck as they walk by the statue.
The second student center on the Tech campus is Tolliver Hall. Tolliver Hall, named after Tech's first full-time dietitian Irene Tolliver, is located at the west end of Centennial Plaza near the Wisteria Student Center. This two-story building was built in the 1920s as one of three dining halls at Louisiana Tech. The eating area in the second floor remained open until it was shut down in the 1980s. In 2003, nearly $3 million was spent to renovate Tolliver Hall into a modern cyber student center. The second floor now houses a cyber cafe which includes computer stations, a McAlister's Deli restaurant, several smaller restaurants, a large dining area with big-screen televisions, and smaller tables surrounding the floor for dining and studying. The offices of the Louisiana Tech Student Government Association, Union Board, the International Student Office, and multicultural affairs are also housed on the second floor. The first floor is used as the post office for Tech's students, faculty, and administration officials.
In the past decade, Louisiana Tech built new buildings and renovated some of the Main Campus' older buildings. The university erected Davison Hall (home of the university's Professional Aviation program), the Micromanufacturing Building, and the Biomedical Engineering Building on the south end of the Main Campus along Hergot Avenue. Tech tore down the old Hale Hall and constructed a brand-new Hale Hall in the style and design of the predecessor in 2004. On the eastern edge of the campus, the university renovated the building now known as University Hall, redesigned the bookstore interior, and made needed repairs to Keeny Hall and Howard Auditorium. All of the major athletics facilities on the north part of the Main Campus have received major upgrades and renovations in the past five years.
Construction started in early 2011 on a new College of Business building. The facility serves as the centerpiece of the entrepreneurship and business programs of the College of Business. The building features new classrooms, two auditoriums, computer labs, research centers, meeting rooms, and career and student support centers.
Louisiana Tech has announced plans to construct a new College of Engineering and Science building adjacent to Bogard Hall.
The campus also hosts the Idea Place, a science museum; A.E. Phillips Lab School, a K-8 school which is recognized as a "Five Star School" by the Louisiana Department of Education; and the Joe D. Waggonner Center for Bipartisan Politics and Public Policy.
South Campus
South Campus is located southwest of the main campus in Ruston. It is home to the School of Agricultural Science and Forestry, Center for Rural Development, Equine Center, John D. Griffin Horticultural Garden, and Tech Farm. The Tech Farm Salesroom markets dairy, meat, and plant products produced and processed by Tech Farm to the public. Students enrolled in agriculture or forestry programs attend classes in Reese Hall, the agricultural laboratory, and in Lomax Hall, the forestry and plant science complex, which is home to the Louisiana Tech Greenhouses, Horticultural Conservatory, and the Spatial Data Laboratory.
Enterprise Campus
In Fall Quarter 2009, the university broke ground on the new Enterprise Campus, which will expand the campus upon completion. The Enterprise Campus will be a green building project and will be a research facility available to technology companies and businesses. The Enterprise Campus will also try to bridge the Engineering and Business colleges with the addition of the Entrepreneurship and Innovation Center (EIC).
In 2010, Louisiana Tech finished the renovations of the old Visual Arts Building by transforming that building into the new Entrepreneurship and Innovation (E&I) Center. The E&I Center will serve as the central hub for the Center for Entrepreneurship and Information Technology's (CEnIT) programs and is located between the College of Business building and Bogard Hall (COES).
Louisiana Tech broke ground on Tech Pointe, the first building on the Enterprise Campus, in 2010. Tech Pointe will house the Cyberspace Research Laboratory as well as high-tech companies and start-up technology companies. The facility will include access to the Louisiana Optical Network Initiative (LONI), fiber-optic and Internet networks, advanced computing capabilities, and other information technology supports needed to meet the demands of 24/7 high-tech companies and specialized cyber security research. Tech Pointe is scheduled for completion sometime in 2011.
The university recently unveiled plans to build a new College of Engineering and Science (COES) building. The three-story building will provide new active learning class labs, engineering shops, and meeting rooms for classes in math, science, and engineering. The new COES building will provide new learning space for the university's first-year and second-year engineering and science students for the first time since the completion of Bogard Hall in 1940. Upon completion of the new College of Engineering and Science building, Louisiana Tech plans to renovate and improve Bogard Hall.
Barksdale Campus
Since September 1965, Louisiana Tech has offered on-base degree programs through its satellite campus at Barksdale Air Force Base in Bossier City, Louisiana. The university works in conjunction with the Department of the Air Force to provide postsecondary education programs that are designed to meet the needs of Air Force personnel. While the primary focus of the Barksdale campus is to educate Air Force personnel, civilians are permitted to take part in the classes offered at the Barksdale campus if space is available. All courses offered at Tech Barksdale are taught on-base or online. The administrative offices for the Louisiana Tech Barksdale Air Force Program are located in the Base Education Center.
Academics
Student body
As of the Fall 2018 quarter, Louisiana Tech had an enrollment of 12,463 students pursuing degrees in five academic colleges. The student body has members from every Louisiana parish, 43 U.S. states, and 64 foreign countries. Louisiana residents account for 85.0% of the student population, while out-of-state students and international students account for 11.1% and 4.0% of the student body, respectively. The student body at Louisiana Tech is 69.4% white, 13.3% black, 3.8% international students, and 13.5% other or "unknown" ethnicity. The student body consists of 50.2% women and 49.8% men.
The Fall 2016 incoming freshman class at Louisiana Tech consisted of 2,018 students. This incoming freshman class had an average ACT score of 24.7, with 31% scoring 27–36 and 45% scoring 22–26. Of the 2015 freshman class, 83.0% were Louisiana residents, 16.3% were out-of-state students, and 0.7% were international students. Louisiana Tech's 2015 freshman class included ten National Merit Scholars and one National Achievement Scholar.
As of Fall 2015, the College of Engineering and Science had the largest enrollment of any college at Louisiana Tech with 22.9% of the student body. The College of Education, College of Liberal Arts, the College of Applied and Natural Sciences, and the College of Business had 18.4%, 14.0%, 13.1%, and 9.5%, respectively. About 22.2% of the student body were enrolled in Basic and Career Studies.
Rankings
In the 2021 U.S. News and World Report ranking of public universities, Louisiana Tech is not ranked, falling in the 298–389 category. Forbes 2019 edition of America's Top Colleges ranked Louisiana Tech as the 132nd best public college in the nation, the 170th best research university in the nation, the 397th best college overall, and the 81st best college in the South. According to the Washington Monthly 2019 National University Rankings, which consider research, community service, social mobility, and net price of attendance, Louisiana Tech ranked 317th nationally. The Wall Street Journal/Times Higher Education College Rankings 2019 ranked Louisiana Tech 601–800th in the United States. The Times Higher Education World University Rankings 2020, which measure an institution’s performance across teaching, research, knowledge transfer, and international outlook, ranked Louisiana Tech 801–1000th in the world. Times Higher Education World University Rankings named Louisiana Tech one of twenty universities in the world that are rising stars and could challenge the elites to become globally renowned by the year 2030.
Money magazine named Louisiana Tech the best college in Louisiana in its 2016 The Best College in Every State publication. In addition, Louisiana Tech ranked 235th in Money's Best Colleges, which ranked schools based on value by assessing educational quality, affordability, and alumni success. Forbes 2019 edition of America's Best Value Colleges ranked Louisiana Tech as the 159th best overall value among all American colleges and universities. In the 2018 Kiplinger's Personal Finance Best College Values rankings, Louisiana Tech ranked No. 1 among all Louisiana public colleges, 65th among all public colleges in the nation, and 189th among all public and private colleges in the United States. In the 2016 U.S. News and World Report Best Colleges rankings, Louisiana Tech ranked No. 1 among public national universities and 6th among all national universities for graduating students with the least amount of debt. Louisiana Tech ranked 6th in Business Insider's 2015 Most Underrated Colleges In America rankings. According to the 2015–2016 PayScale College Salary Report of salary potential for all alumni, Louisiana Tech ranks first among all public and private institutions in Louisiana, 60th nationally among public schools, 84th nationally among research universities, and 184th nationally among all universities and colleges.
Several of Louisiana Tech's graduate programs were named to the 2021 U.S. News and World Report list of Best Graduate Schools including the College of Business, Doctor of Audiology, Biomedical Engineering, College of Education, Master of Arts in Speech–Language Pathology, and College of Engineering. In the 2020 U.S. News and World Report Best Colleges rankings, Louisiana Tech's undergraduate engineering program ranked 134th in the nation, and Tech's undergraduate business program ranked 224th. The online Professional MBA was named to the 2020 U.S. News list of Best Online Programs. In the 2019 U.S. News and World Report Best Grad Schools rankings, Louisiana Tech ranked 145th in engineering, 141st in speech–language pathology, and 185th in education.
Colleges
The university confers associate's, bachelor's and master's degrees through its five academic colleges. Additionally, Louisiana Tech offers doctoral degrees in audiology, business administration, counseling psychology (accredited by the American Psychological Association), industrial/organizational psychology, computational analysis and modeling, engineering, and biomedical engineering, with a joint MD-PhD program with the Louisiana State University Health Sciences Center Shreveport.
College of Applied and Natural Sciences
The College of Applied and Natural Sciences is made up of the School of Agricultural Sciences and Forestry, School of Biological Sciences, Department of Health Informatics and Information Management, School of Human Ecology, and Division of Nursing.
College of Business
Louisiana Tech University’s College of Business houses the Department of Economics & Finance, Department of Marketing & Analysis, Department of Management & Sustainable Supply Chain Management, School of Accountancy, and Department of Computer Information Systems. The college offers eight undergraduate degree programs in addition to the Master of Business Administration, Master of Accountancy, and Doctor of Business Administration.
The MBA is offered in several delivery modes including Traditional, Professional (online), Hybrid (with a focus on Information Assurance), and Executive. The Executive MBA is housed in Louisiana Tech’s Bossier City Academic Success Center and is specifically designed for students who already have management experience. The program is structured to provide minimal disruption to work schedules; students pursuing the Executive MBA meet for classes every other weekend (Friday evenings and all day Saturday). The College of Business also offers several certificate programs.
The college has been accredited by AACSB International since 1955, when the School of Business Administration was one of 78 schools of business in the United States to become members of the American Association of Collegiate Schools of Business. The MBA program was initially accredited in 1978, and the School of Accountancy was among the initial 20 schools receiving separate Accounting accreditation and the first in Louisiana.
The college houses the Center for Information Assurance, the Center for Entrepreneurship and Information Technology (CEnIT), the Academy of Marketing Science, and the Center for Economic Research, as well as The DATA BASE for Advances in Information Systems journal. It is also designated by the National Security Agency (NSA) and the Department of Homeland Security (DHS) as a Center of Academic Excellence in Cyber Defense Research and Education.
College of Education
The College of Education traces its mission back to the origins of Louisiana Tech in 1894, when the preparation of teachers was one of the early missions of the institution. In 1970, the School of Education was elevated to the level of a college.
Today, the College of Education consists of three separate departments: The Department of Curriculum, Instruction, and Leadership, The Department of Kinesiology, and The Department of Psychology and Behavioral Sciences. Together, the three academic departments award thirty-five different academic degrees ranging from the baccalaureate to the doctoral levels.
Notable subdivisions of the College of Education include A.E. Phillips Laboratory School, the Science and Technology Education Center, the NASA Educator Resource Center, The IDEA Place, and the Professional Development and Research Institute on Blindness.
College of Engineering and Science
The College of Engineering and Science (COES) is the engineering school at Louisiana Tech University. The COES offers thirteen undergraduate degrees including seven engineering degrees, two engineering technology degrees, and four science degrees. The college also offers seven Master of Science degrees and four Doctorate degrees.
The college started as the Department of Mechanics in 1894 with a two-year program in Mechanic Arts. Since its founding, the college expanded its degree program to include chemical engineering, civil engineering, electrical engineering, industrial engineering, and mechanical engineering. The COES began offering one of the first biomedical engineering curriculum programs in the United States in 1972 and the first nanosystems engineering BS degree in 2005. Louisiana Tech launched the nation's first cyber engineering BS degree in 2012.
Bogard Hall is the second and current home of the College of Engineering and Science. Louisiana Tech constructed the building in 1940 and named it after Frank Bogard, the former Dean of Engineering at Louisiana Tech. The college also utilizes Nethken Hall, the Biomedical Engineering Building, the Institute for Micromanufacturing, and parts of Carson-Taylor Hall for the college's activities. In early 2011, Louisiana Tech announced plans to construct a new Integrated Engineering and Science Building adjacent to Bogard Hall. The building will provide new classrooms, shops, and meeting rooms for engineering, science, and math students at Louisiana Tech. When the new engineering building is complete, the university will begin renovations of Bogard Hall.
College of Liberal Arts
The College of Liberal Arts consists of nine academic departments: Architecture, Art, History, Journalism, Literature and Language, Performing Arts, Professional Aviation, Social Science, and Speech. The college offers 26 degree programs, including 19 bachelor's degrees, 6 master's degrees, and the doctorate in audiology.
The College of Liberal Arts hosts the Louisiana Tech University Honors Program. Tech's Air Force Reserve Officer Training Corps (ROTC) Detachment 305 is also part of the College of Liberal Arts.
Centers
American Foreign Policy Center - Created in 1989, the American Foreign Policy Center at Louisiana Tech University is a joint initiative of the Department of History and Prescott Memorial Library. The Center’s goals are to encourage research in the field of U.S. foreign policy, and to promote public awareness of world affairs. The Center is located on the fourth floor of Prescott Library.
Joe D. Waggonner Center for Bipartisan Politics and Public Policy - The Waggonner Center fosters and promotes active and responsible civic engagement through an interdisciplinary combination of academic research, innovative curricular initiatives, and community outreach. The center brings together faculty from across Louisiana Tech University who take as their point of departure the intersection of American principles, institutions, and public policy. By working across traditional academic disciplines, the Waggonner Center aims to create an unprecedented academic experience that engages faculty, students, and community stakeholders alike.
Galleries
The School of Design at Louisiana Tech University has two gallery spaces available to artists working in all media, including painting, drawing, video, printmaking, installation, sculpture, photography, ceramics, fiber, and digital works. Several calls for entry are open year round. The mission of the galleries at The School of Design at Louisiana Tech University is to contribute to student and community learning through exposure to the work and philosophy of nationally recognized contemporary artists working in the visual arts. The SOD Galleries accept unsolicited submissions on a rolling basis, which are reviewed quarterly by the Gallery Committee.
Interdisciplinary centers
Center for Entrepreneurship and Information Technology (CEnIT)
In 2001, Louisiana Tech proposed the creation of the Center for Entrepreneurship and Information Technology (CEnIT), a collaboration between the College of Engineering & Science (COES) and the College of Business (COB). The CEnIT focuses the resources of the two colleges and their related centers in promoting entrepreneurial research, technology transfer, and education. The CEnIT was approved in 2002 by the University of Louisiana System Board of Supervisors and the Louisiana Board of Regents. The CEnIT is housed in the CEnIT Innovation Lab on the main floor of the Student Center next to The Quad. The center will move to the newly renovated University Hall building located next to the College of Business sometime in 2011.
The Top Dawg Competition was created in 2002 by the Association of Business, Engineering, and Science Entrepreneurs (ABESE), now known as Bulldog Entrepreneurs. The annual competition is hosted by Bulldog Entrepreneurs and in conjunction with the CEnIT, COES, College of Business, and the Technology Business Development Center (TBDC). The competition started as the Top Dawg Business Plan Competition in 2002 and expanded six years later to include the Idea Pitch Competition. Participants in the Top Dawg Competition create teams to develop innovative ideas into real businesses and showcase intellectual properties developed by Louisiana Tech researchers and students. The teams must foster an idea, create a business plan, and compete for cash prizes and resources needed to further develop the team's concept. The total amount of money awarded during each competition to the competing teams has grown since 2002 to $14,500 for the 2011 Competition. In addition to prize money from the COES and College of Business, additional prize money is awarded by Jones Walker, Louisiana Tech's Innovation Enterprise Fund, and the Ruston-Lincoln Parish Chamber of Commerce.
Continuing education and distance learning
Global_Campus
Louisiana Tech established the Global_Campus on September 16, 2008. The campus offers a variety of degree programs, certificate programs, and general education courses. Global_Campus focuses on providing more flexibility and choices to Tech's traditional students and complete online education services to non-traditional students, such as military, international, and dual enrollment students.
Global_Campus offers over 275 distance learning courses, while more courses are in development. Louisiana Tech has six master's degree programs, two bachelor's degree programs, and one associate degree program available via distance learning. In addition to the nine degree programs, Global_Campus offers eight professional development programs.
CenturyLink@LaTech
In the Fall of 2011, Louisiana Tech and CenturyLink created a partnership called "CenturyLink@LaTech" to meet the workforce development and training needs of CenturyLink. It is designed for CenturyLink employees with general responsibilities and interests in telecommunications engineering, information technology or information systems.
CenturyLink@LaTech offers a Communications Systems Graduate Certificate.
Student life
Activities
Louisiana Tech has over 163 officially recognized student organizations. Students can opt to participate in Student Government, Union Board, The Tech Talk, TechTV, Lagniappe, Greek, religious, honor, service, spirit, intramurals, club sports, pre-professional, and special interest organizations.
The Louisiana Tech University Union Board organizes entertainment activities for Louisiana Tech students throughout the entire school year. About 80 students participate in Union Board each academic school year. The Union Board receives an annual budget of about $210,000 in Student Assessment Fees and uses the money to organize and produce the annual Fall Fling, Talent Show, Spring Fling, Tech the Halls, the Miss Tech Pageant, RusVegas casino night, and other special events.
The Student Government Association (SGA) is the official governing body of the Louisiana Tech University Student Association (the student body) and consists of three branches: the Student Senate, the Executive Branch, and the Supreme Court. The organization is responsible for the Welcome Week/Dawg Haul activities, Homecoming Week, the Big Event, short-term student loans, voter registration drives for the student body, and other various activities throughout the year.
Louisiana Tech and neighboring Grambling State University operate an ROTC exchange program. Louisiana Tech operates the Air Force ROTC while Grambling operates the Army ROTC, and students from either school may participate in either program.
Since 2006, Louisiana Tech has played host to Summer Leadership School for Air Force Junior Reserve Officer Training Corps cadets from public school systems all over the United States. It is operated by USAF retirees, but mostly by college-level Cadet Training Officers. These sessions are held for nine days toward the end of June.
Media
The Tech Talk has been Louisiana Tech's official student newspaper since 1926. The Tech Talk is published every Thursday of the regular school year, except for finals week and vacation periods. The award-winning newspaper has been honored in the past few years by the Southeast Journalism Conference (SEJC), Louisiana Press Women, National Federation of Press Women, Louisiana Press Association, and the Society of Professional Journalists. The Tech Talk was named the 10th Best Newspaper in the South in 2010 and the 3rd Best Newspaper in the South in 2011 by the Southeast Journalism Conference.
Speak Magazine is Louisiana Tech's student magazine. It has been published quarterly since 2014.
The Lagniappe is Tech's yearbook. The Lagniappe, which literally means "something extra", was first published in 1905 and has been published every year since except for 1906, 1913–1921, 1926, and 1944–1945. The yearbook's annual release date is around the last week of the regular school year in the middle of May. The Lagniappe was recognized in May 2011 as "First Class" by the Associated Collegiate Press and as one of the top 2 percent of high school and collegiate yearbooks by Balfour Publishing's "The Yearbook's Yearbook". Mary May Brown, the recently retired faculty advisor of the Lagniappe for 23 years, was named the Collegiate Publications Advisor of the Year by the Louisiana Press Women in 2011.
Louisiana Tech's local radio station is KLPI. The radio station was founded as WLPI-AM in 1966 and originally housed in a rented office on Railroad Avenue in downtown Ruston. By 1974, construction was completed on KLPI-FM, and the radio station began broadcasting at 10 watts. Afterward, WLPI-AM was shut down due to maintenance problems with the station's equipment. Today, KLPI transmits at 4,000 watts of power and is located at the southeast corner of the Student Center at the heart of the Tech campus.
Louisiana TechTV has been the official student-run television station at Louisiana Tech since its launch in 2000. TechTV shows newly released movies, TechTV news, personal news clips by the general student body, original programming such as Tech Cribs and Tech Play, and informational slides for upcoming campus events.
Residential life
A building program, designed by the joint-venture of Tipton Associates, APAC, and Ashe Broussard Weinzettle Architects, is underway to move from traditional dormitories to apartment-style complexes. The first of these, University Park, opened in 2004 and houses up to 450 students. The second phase, known as University Park 2 (UP2) opened in 2008. The third phase, Park Place, opened in 2009.
While the university is constructing new apartment-style student housing complexes, Louisiana Tech is moving to demolish some of the traditional dormitories. The Kidd Residence Hall on the southern part of the Tech campus was demolished in 2004. The university also demolished the Caruthers and Neilson Residence Halls on the north side of the campus. The planned demolition of Caruthers Hall was postponed in 2005 to allow three hundred evacuees from Hurricane Katrina to stay in the dorm for three months.
Greek life
Louisiana Tech has 21 nationally recognized Greek organizations. Each fraternity and sorority on the Tech campus promotes community services, philanthropy, and university involvement through each organization's own locally and/or nationally designated service project. The local Kappa Delta sorority raised over $10,000 this year from their annual Shamrock 5K & 1 Mile Run to benefit the Methodist Children's Home of Ruston. Since 2002, the Phi Mu sorority has held a golf tournament to benefit the Children's Miracle Network. The Phi Mu Golf Tournament raised $7,000 in 2007 and $10,000 in 2009. Sigma Kappa has held the "Kickin' Grass" kickball tournament to benefit the Alzheimer's Research Foundation since 2009 and raised $2,300 during the 3rd Annual tournament in 2011.
The Greek organizations also participate in other university activities including the Big Event, Homecoming Week activities, the Homecoming Step Show, and Bulldog Football tailgating at Hide-Away Park near Joe Aillet Stadium. The fraternities and sororities participate in Greek Week each year during the spring quarter.
Louisiana Tech's Greek fraternities and sororities are governed by three governing boards. The Interfraternity Council (IFC) governs the ten male fraternities, Panhellenic governs the five female sororities, and the National Pan-Hellenic Council (also known as "the Pan") governs the six multicultural sororities and fraternities.
Athletics
Louisiana Tech's sixteen varsity athletic teams compete in NCAA Division I sports as a member of Conference USA. The university's seven men's teams are known as the Bulldogs, and the nine women's teams are known as the Lady Techsters. The teams wear the university colors of red and blue, except for the women's basketball team, which wears its signature Columbia blue.
Football
Louisiana Tech's football team played its first season in 1901 and has competed at the NCAA Division I Football Bowl Subdivision (FBS) level from 1975 to 1981 and from 1989 to the present. In its 115 years of existence, Tech's football program has won three National Championships (1972 National Football Foundation Co-National Champions, 1973 Division II National Champions, 1974 UPI College Division National Champions), played in 11 major college bowl games (7–3–1 overall record), and earned 25 conference titles. Its former players include 50 All-Americans, including Terry Bradshaw, Fred Dean, Willie Roaf, Matt Stover, Ryan Moats, Josh Scobee, Troy Edwards, Tim Rattay, Luke McCown, Tramon Williams, and Ryan Allen.
The football team competes as a Division I FBS institution in Conference USA. The Bulldogs are coached by head coach Skip Holtz and play their home games at Joe Aillet Stadium on the north end of the Tech campus.
Men's basketball
The Louisiana Tech Bulldogs men's basketball program started in the 1909–10 season under Head Coach Percy S. Prince. The basketball team has won 25 regular season conference titles and 6 conference tournament championships. In addition, the Dunkin' Dawgs have earned 6 NCAA Tournament and 9 NIT appearances. The Bulldog program reached the NCAA or the NIT tournaments nine straight years from 1984 to 1992.
Three Bulldogs have had their numbers retired by Louisiana Tech. These are Lady Techster Head Coach Leon Barmore (#12), Karl Malone (#32), and collegiate All-American player Jackie Moreland (#42). Other notable former Bulldog players include Mike Green, Paul Millsap, Scotty Robertson, P. J. Brown, and Tim Floyd.
The Bulldogs are led by head coach Eric Konkol and play their home games on Karl Malone Court at the Thomas Assembly Center.
Women's basketball
The Lady Techsters women's basketball program was founded in 1974 with Sonja Hogg as its first head coach. The Lady Techsters have won three national championships (1981, 1982, 1988), 20 regular season conference championships, and 16 conference tournament championships. The program has also appeared in eight national championship games, 13 Final Fours, and 27 NCAA Women's Basketball Tournaments including 25 consecutive appearances from 1982 to 2006.
Alumni of the program include WNBA All-Stars Teresa Weatherspoon, Betty Lennox, and Cheryl Ford in addition to Women's Basketball Hall of Fame coaches Leon Barmore, Kurt Budke, Mickie DeMoss, Sonja Hogg, and Kim Mulkey. Three former assistant coaches of the Lady Techsters basketball team have won NCAA National Women's Basketball Championships as head coaches: Leon Barmore (1988 with Louisiana Tech), Kim Mulkey (2005, 2012, and 2019 with Baylor), and Gary Blair (2011 with Texas A&M). Also, former Lady Techsters assistant coach Nell Fortner won the gold medal at the 2000 Sydney Olympics as the head coach for the United States women's national basketball team.
The team played their home games at Memorial Gym on Louisiana Tech's campus from 1974 until 1982 when the Thomas Assembly Center was constructed. The team is coached by former Lady Techster standout Brooke Stoehr and plays its home games at the Thomas Assembly Center.
Traditions
Lady of the Mist
The Lady of the Mist is one of the most recognizable landmarks on the Louisiana Tech Main Campus. The granite sculpture sits in the midst of a fountain in the middle of the quadrangle (The Quad), one of the focal points of the university and part of the older section of the Main Campus. The Lady of the Mist symbolizes "Alma Mater" welcoming new students and bidding farewell to Tech graduates. The statue also symbolizes the hope that Louisiana Tech graduates will fulfill their ambitions and highest callings in life.
The statue and fountain were funded in 1938 by the Women's Panhellenic Association of Ruston, the governing body of the university's sorority groups. The Lady of the Mist was the idea of Art & Architecture faculty member Mary Moffett and Art Department Chair Elizabeth Bethea. The Lady of the Mist was created by Duncan Ferguson and Jules Struppeck and specifically located in the middle of the Quad facing north toward the old north entrance columns of the Tech campus. This was done to welcome everyone to the campus as people looked through the north entrance columns to see the statue's open arms waiting to greet them.
The Lady fell into disrepair in the years after its construction. In 1985, the statue was restored through the efforts of the Student Government Association, Panhellenic, Residence Hall Association, and Association of Women Students. Today, the statue remains a focal point for students and alumni who return to the Tech campus. Incoming freshmen commemorate their new beginning by tossing a gold medallion into the fountain.
Alumni brick walkway
The alumni walkway was constructed in 1995 as part of the centennial celebration at Louisiana Tech. The brick path stretches from the corner of Adams Boulevard and Dan Reneau Drive through the heart of Centennial Plaza to the footsteps of Tolliver Hall. The alumni brick walkway then follows Wisteria Street north toward Railroad Avenue. The plan is to extend the alumni brick walkway through the University Park student housing apartments that were built near J.C. Love Field. The walkway contained 72,000 engraved bricks representing all Louisiana Tech graduates from 1897 up to the year 2000.
Notable people
Louisiana Tech has produced prominent businesspeople across several industries. Louisiana Tech alumnus Nick Akins is currently serving as chief executive officer of Fortune 500 company American Electric Power. Alumnus Glen Post is the former CEO of CenturyLink, and alumnus Michael McCallister is the former CEO of Humana. Edward L. Moyers, former president and CEO of several railroads including MidSouth Rail, Illinois Central Railroad, and Southern Pacific Railroad, is a Louisiana Tech graduate. Billionaire businessmen brothers Charles Wyly and Sam Wyly graduated from Louisiana Tech. Founder of Duck Commander and star of A&E's reality television series Duck Dynasty Phil Robertson earned two degrees from Louisiana Tech. Will Wright, designer of some of the best-selling video games of all time (SimCity, The Sims, and Spore) and co-founder of game development company Maxis, attended Louisiana Tech.
Alumni of Louisiana Tech have also made their mark in the arts, entertainment, and the humanities. Country music superstars Kix Brooks and Trace Adkins are Louisiana Tech alumni along with two-time Grammy Award nominee Wayne Watson. Eddie Gossling, writer and producer for Comedy Central's Tosh.0, attended Louisiana Tech. Alumna Faith Jenkins, winner of the most scholarship money in Miss America pageant history, was the host of the Judge Faith television show, and alumna Sharon Brown is a former Miss USA. Louisiana Tech graduate Marc Swayze is known for creating comic book superheroine Mary Marvel and his work on Captain Marvel.
Louisiana Tech graduates have been influential through public service and activism. Former United States Senators James P. Pope and Saxby Chambliss and United States Representatives Newt V. Mills, Joe Waggonner, Jim McCrery, and Rodney Alexander all attended Louisiana Tech. In addition, James P. Pope served as director of the Tennessee Valley Authority. Louisiana Tech alumnus Clint Williamson served as United States Ambassador-at-Large for War Crimes Issues. Many notable military leaders are Louisiana Tech alumni including lieutenant general David Wade, lieutenant general John Spencer Hardy, major general Susan Y. Desjardins, and major general Jack Ramsaur II. Alumna Kim Gandy served as president of the National Organization for Women, and alumnus Jerome Ringo served as chairman of the National Wildlife Federation.
Louisiana Tech athletes have starred in the National Football League, National Basketball Association, and Women's National Basketball Association as well as other professional sports. Three Bulldogs have been inducted into the Pro Football Hall of Fame and College Football Hall of Fame: Four-time Super Bowl champion quarterback Terry Bradshaw, four-time Pro Bowl defensive end Fred Dean, and eleven-time Pro Bowl offensive tackle Willie Roaf. Other notable former Bulldog football players include Leo Sanford, Roger Carr, Pat Tilley, Matt Stover, Troy Edwards, Tim Rattay, Tramon Williams, and Ryan Allen. Legendary Lady Techsters coach Leon Barmore, two-time NBA Most Valuable Player Karl Malone, and Wade Trophy winner Teresa Weatherspoon are Louisiana Tech's three inductees into the Naismith Memorial Basketball Hall of Fame. Other notable former Bulldog basketball players include former NBA head coaches Scotty Robertson and Tim Floyd, ABA All-Star Mike Green, NBA champion P. J. Brown, and four-time NBA All-Star Paul Millsap. The Women's Basketball Hall of Fame has inducted seven Louisiana Tech alumni including Leon Barmore, Janice Lawrence Braxton, Mickie DeMoss, Sonja Hogg, Pam Kelly, Kim Mulkey, and Teresa Weatherspoon. Other notable former Lady Techsters include Olympic gold medalist Venus Lacy, two-time WNBA All-Star Vickie Johnson, WNBA Finals Most Valuable Player Betty Lennox, and WNBA Rookie of the Year Cheryl Ford.
References
External links
Louisiana Tech Athletics website
Technological universities in the United States
Education in Lincoln Parish, Louisiana
Ruston, Louisiana
Universities and colleges accredited by the Southern Association of Colleges and Schools
1894 establishments in Louisiana
Educational institutions established in 1894
Buildings and structures in Lincoln Parish, Louisiana
Tourist attractions in Lincoln Parish, Louisiana
Universities and colleges in Ark-La-Tex
Public universities and colleges in Louisiana |
45557863 | https://en.wikipedia.org/wiki/Hillol%20Kargupta | Hillol Kargupta | Hillol Kargupta is an academic, scientist, and entrepreneur.
He is a co-founder and President of Agnik, a data analytics company for connected cars and the Internet of Things. He also serves as the chairman of the board for KD2U, an organization for promoting research, education, and practice of data analytics in distributed and mobile environments. He was a professor of computer science at the University of Maryland, Baltimore County from 2001 until July 2014.
Kargupta received his PhD in Computer Science from the University of Illinois at Urbana-Champaign, USA, in 1996. Kargupta received his master's degree (M. Tech.) from the Indian Institute of Technology Kanpur, India, and his undergraduate degree (B.Tech) from Regional Engineering College Calicut, India. After finishing his PhD, Kargupta joined the Los Alamos National Laboratory as a post-doctoral researcher and then as a full technical staff member. He joined the Electrical Engineering and Computer Science Department of Washington State University in 1997 as an assistant professor. In 2001 Kargupta joined the Computer Science and Electrical Engineering Department of the University of Maryland at Baltimore County (UMBC). He spent 13 years at UMBC and became a full professor in 2009. In 2008, he also founded the Society for Knowledge Discovery in Distributed and Ubiquitous (KD2U) Environments. He currently serves as the President of Agnik.
Awards
IEEE 10-Year Highest Impact Paper award.
SIAM (Society of Industrial and Applied Mathematics) annual best student paper award, 1996.
References
External links
Agnik mines data from vehicles
UBI Going Mainstream?
Kargupta talk at the ACM SIGKDD Conference
Halmstad Colloquium - Big Data Analytics for Connected Cars
Living people
1967 births
People from Darjeeling
Indian computer scientists
National Institute of Technology Calicut |
3697414 | https://en.wikipedia.org/wiki/Philosophical%20anthropology | Philosophical anthropology | Philosophical anthropology, sometimes called anthropological philosophy, is a discipline dealing with questions of metaphysics and phenomenology of the human person.
History
Ancient Christian writers: Augustine of Hippo
Augustine of Hippo was one of the first ancient Christian Latin authors with a very clear anthropological vision, although it is not clear whether he had any influence on Max Scheler, the founder of philosophical anthropology as an independent discipline, or on any of the major philosophers that followed him. Augustine has been cited by Husserl and Heidegger as one of the early writers to inquire into time-consciousness and the role of seeing in the feeling of "Being-in-the-world".
Augustine saw the human being as a perfect unity of two substances: soul and body. He was much closer in this anthropological view to Aristotle than to Plato. In his late treatise On Care to Be Had for the Dead sec. 5 (420 AD) he insisted that the body is an essential part of the human person:
Augustine's favourite figure to describe body-soul unity is marriage: caro tua, coniux tua – your body is your wife. Initially, the two elements were in perfect harmony. After the fall of humanity they are now experiencing dramatic combat between one another.
They are two categorically different things: the body is a three-dimensional object composed of the four elements, whereas the soul has no spatial dimensions. Soul is a kind of substance, participating in reason, fit for ruling the body. Augustine was not preoccupied, as Plato and Descartes were, with going too much into detail in his efforts to explain the metaphysics of the soul-body union. It sufficed for him to admit that they were metaphysically distinct. To be a human is to be a composite of soul and body, and that the soul is superior to the body. The latter statement is grounded in his hierarchical classification of things into those that merely exist, those that exist and live, and those that exist, live, and have intelligence or reason.
According to N. Blasquez, Augustine's dualism of substances of the body and soul doesn't stop him from seeing the unity of body and soul as a substance itself. Following Aristotle and other ancient philosophers, he defined man as a rational mortal animal – animal rationale mortale.
Modern period
Philosophical anthropology as a kind of thought, before it was founded as a distinct philosophical discipline in the 1920s, emerged as post-medieval thought striving for emancipation from the Christian religion and the Aristotelian tradition. The origin of this liberation, characteristic of modernity, was the Cartesian skepticism formulated by Descartes in the first two of his Meditations on First Philosophy (1641).
Immanuel Kant (1724–1804) taught the first lectures on anthropology in the European academic world. He specifically developed a conception of pragmatic anthropology, according to which the human being is studied as a free agent. At the same time, he conceived of his anthropology as an empirical, not a strictly philosophical discipline. Both his philosophical and his anthropological work has been one of the influences in the field during the 19th and 20th century. After Kant, Ludwig Feuerbach is sometimes considered the next most important influence and founder of anthropological philosophy.
During the 19th century, an important contribution came from post-Kantian German idealists like Fichte, Schelling and Hegel, as well from Søren Kierkegaard.
Philosophical anthropology as independent discipline
Since its development in the 1920s, in the milieu of Germany's Weimar culture, philosophical anthropology has been turned into a philosophical discipline, competing with the other traditional sub-disciplines of philosophy such as epistemology, ethics, metaphysics, logic, and aesthetics. It is the attempt to unify disparate ways of understanding the behaviour of humans as both creatures of their social environments and creators of their own values. Although the majority of philosophers throughout the history of philosophy can be said to have a distinctive "anthropology" that undergirds their thought, philosophical anthropology itself, as a specific discipline in philosophy, arose within the later modern period as an outgrowth from developing methods in philosophy, such as phenomenology and existentialism. The former, which draws its energy from methodical reflection on human experience (first-person perspective) as well as from the philosopher's own personal experience, naturally aided the emergence of philosophical explorations of human nature and the human condition.
1920s Germany
Max Scheler, from 1900 until 1920, had been a follower of Husserl's phenomenology, the hegemonic form of philosophy in Germany at the time. Scheler sought to apply Husserl's phenomenological approach to different topics. From 1920 Scheler laid the foundation for philosophical anthropology as a philosophical discipline, competing with phenomenology and other philosophic disciplines. Husserl and Martin Heidegger (1889–1976) were the two most authoritative philosophers in Germany at the time, and their criticism of philosophical anthropology and Scheler has had a major impact on the discipline.
Scheler defined the human being not so much as a "rational animal" (as has traditionally been the case since Aristotle) but essentially as a "loving being". He breaks down the traditional hylomorphic conception of the human person, and describes the personal being with a tripartite structure of lived body, soul, and spirit. Love and hatred are not psychological emotions, but spiritual, intentional acts of the person, which he categorises as "intentional feelings." Scheler based his philosophical anthropology in a Christian metaphysics of the spirit. Helmuth Plessner would later emancipate philosophical anthropology from Christianity.
Helmuth Plessner and Arnold Gehlen have been influenced by Scheler, and they are the three major representatives of philosophical anthropology as a movement.
From the 1940s
Ernst Cassirer, a neo-Kantian philosopher, was the most influential source for the definition and development of the field from the 1940s until the 1960s. Particularly influential has been Cassirer's description of man as a symbolic animal, which was reprised in the 1960s by Gilbert Durand, scholar of symbolic anthropology and the imaginary.
In 1953, future pope Karol Wojtyla based his dissertation thesis on Max Scheler, limiting himself to the works Scheler wrote before rejecting Catholicism and the Judeo-Christian tradition in 1920. Wojtyla used Scheler as an example that phenomenology could be reconciled with Catholicism. Some authors have argued that Wojtyla influenced philosophical anthropology.
In the 20th century, other important contributors and influences to philosophical anthropology were Paul Häberlin (1878–1960), Martin Buber (1878–1965), E.R. Dodds (1893–1979), Hans-Georg Gadamer (1900–2002), Eric Voegelin (1901–85), Hans Jonas (1903–93), Josef Pieper (1904–97), Hans-Eduard Hengstenberg (1904–98), Jean-Paul Sartre (1905–80), Joseph Maréchal (1878–1944), Maurice Merleau-Ponty (1908–61), Paul Ricoeur (1913–2005), René Girard (1923–2015), Alasdair MacIntyre (1929–), Pierre Bourdieu (1930–2002), Hans Blumenberg, Jacques Derrida (1930–2004), Emerich Coreth (1919–2006), Leonardo Polo (1926–2013), and, importantly, P. M. S. Hacker (1939- ).
Anthropology of interpersonal relationships
A large focus of philosophical anthropology is also interpersonal relationships, as an attempt to unify disparate ways of understanding the behaviour of humans as both creatures of their social environments and creators of their own values. It also analyses the ontology that is in play in human relationships – of which intersubjectivity is a major theme. Intersubjectivity is the study of how two individuals, subjects, whose experiences and interpretations of the world are radically different, understand and relate to each other.
Recently anthropology has begun to shift towards studies of intersubjectivity and other existential/phenomenological themes. Studies of language have also gained new prominence in philosophy and sociology due to language's close ties with the question of intersubjectivity.
Michael D. Jackson's study of intersubjectivity
The academic Michael D. Jackson is another important philosophical anthropologist. His research and fieldwork concentrate on existential themes of "being in the world" (Dasein) as well as interpersonal relationships. His methodology challenges traditional anthropology due to its focus on first-person experience. In his best-known book, Minima Ethnographica, which focuses on intersubjectivity and interpersonal relationships, he draws upon his ethnographic fieldwork in order to explore existential theory.
In his latest book, Existential Anthropology, he explores the notion of control, stating that humans anthropomorphize inanimate objects around them in order to enter into an interpersonal relationship with them. In this way humans are able to feel as if they have control over situations that they cannot control because rather than treating the object as an object, they treat it as if it is a rational being capable of understanding their feelings and language. Good examples are prayer to gods to alleviate drought or to help a sick person or cursing at a computer that has ceased to function.
P. M. S. Hacker's Tetralogy on Human Nature
A foremost Wittgensteinian, P. M. S. Hacker has recently completed a tetralogy in philosophical anthropology: “The first was Human Nature: The Categorical Framework (2007), which provided the stage set. The second was The Intellectual Powers: A Study of Human Nature (2013), which began the play with the presentation of the intellect and its courtiers. The third The Passions: A Study of Human Nature (2017), which introduced the drama of the passions and the emotions. The fourth and final volume, The Moral Powers: A Study of Human Nature (2020), turns to the moral powers and the will, to good and evil, to pleasure and happiness, to what gives meaning to our lives, and the place of death in our lives.
This tetralogy constitutes a Summa Anthropologica in as much as it presents a systematic categorical overview of our thought and talk of human nature, ranging from substance, power, and causation to good and evil and the meaning of life. A sine qua non of any philosophical investigation, according to Grice, is a synopsis of the relevant logico-linguistic grammar. It is surely unreasonable that each generation should have to amass afresh these grammatical norms of conceptual exclusion, implication, compatibility, and contextual presupposition, as well as tense and person anomalies and asymmetries. So via the tetralogy I have attempted to provide a compendium of usage of the pertinent categories in philosophical anthropology to assist others in their travels through these landscapes.”
See also
List of important publications in anthropology
Antihumanism (opposite)
Ernst Tugendhat (2007) Anthropologie statt Metaphysik
Introduction to Kant's Anthropology
Martin Buber
Philosophical Anthropology Info – names, books
Notes
References
Bibliography
Blasquez, N, El concepto del substantia segun san Agustin, "Augustinus" 14 (1969), pp. 305–350; 15 (1970), pp. 369–383; 16 (1971), pp. 69–79.
Cassirer, Ernst (1944) An Essay on Man
Couturier Charles SJ, (1954) La structure métaphysique de l'homme d'après saint Augustin, in: Augustinus Magister, Congrès International Augustinien. Communications, Paris, vol. 1, pp. 543–550
Donceel, Joseph F., Philosophical Anthropology, New York: Sheed&Ward 1967.
Gilson, Étienne, (1955) History of Christian Philosophy in the Middle Ages, (2nd ed., reprinted 1985), London: Sheed & Ward, pp. 829.
Fischer, Joachim (2006) Der Identitätskern der Philosophischen Anthropologie (Scheler, Plessner, Gehlen) in Krüger, Hans-Peter and Lindemann, Gesa (2006) Philosophische Anthropologie im 21. Jahrhundert
Fikentscher, Wolfgang (2004) Modes of thought: a study in the anthropology of law and religion
Gianni, A., (1965) Il problema antropologico, Roma.
Hendrics, E. (1954) Platonisches und Biblisches Denken bei Augustinus, in: Augustinus Magister, Congrès International Augustinien. Communications, Paris, vol. 1.
Lucas Lucas, Ramon, Man Incarnate Spirit, a Philosophy of Man Compendium, USA: Circle Press, 2005.
Mann, W.E., Inner-Life Ethics, in:
Masutti, Egidio, (1989), Il problema del corpo in San Agostino, Roma: Borla, p. 230.
Mondin, Battista, Philosophical Anthropology, Man: an Impossible Project?, Rome: Urbaniana University Press, 1991.
Thomas Sturm, Kant und die Wissenschaften vom Menschen. Paderborn: Mentis, 2009. , 9783897856080
Jesús Padilla Gálvez, Philosophical Anthropology. Wittgenstein’s Perspective. Berlin, De Gruyter, 2010. Review
Further reading
Joseph Agassi, Towards a Rational Philosophical Anthropology. The Hague, 1977.
Anicius Manlius Severinus Boethius, The Consolation of Philosophy, Chicago: The Great Books foundation 1959.
Martin Buber, I and Thou, New York: Scribners 1970.
Martin Buber, The Knowledge of Man: A Philosophy of the Interhuman, New York: Harper&Row 1965.
Martin Buber, Between Man and Man, New York: Macmillan 1965.
Albert Camus, The Rebel: An Essay on Man in Revolt, New York: Vintage Books 1956.
Charles Darwin, The Origin of Species by Means of Natural Selection, Chicago – London: Encyclopædia Britannica 1952.
Teilhard de Chardin, The Phenomenon of Man, New York: Harper&Row 1965
Jacques Derrida, l'Ecriture et la Difference
Joachim Fischer, Philosophische Anthropologie. Eine Denkrichtung des 20. Jahrhunderts. Freiburg, 2008.
Sigmund Freud, Three Essays on the Theory of Sexuality, New York: Basic Books 1975.
Erich Fromm, To Have or To Be, New York: Harper&Row 1976.
David Hume, A Treatise of Human Nature
Hans Jonas, The Phenomenon of Life. Chicago, 1966.
Søren Kierkegaard, The Sickness unto Death. 1848.
Hans Köchler, Der innere Bezug von Anthropologie und Ontologie. Das Problem der Anthropologie im Denken Martin Heideggers. Hain: Meisenheim a.G., 1974.
Hans Köchler, "The Relation between Man and World. A Transcendental-anthropological Problem," in: Analecta Husserliana, Vol. 14 (1983), pp. 181–186.
Stanislaw Kowalczyk, An Outline of the Philosophical Anthropology. Frankfurt a.M. etc., 1991.
Michael Jackson, Minima Ethnographica and Existential Anthropology
Michael Landmann, Philosophische Anthropologie. Menschliche Selbstdeutung in Geschichte und Gegenwart. Berlin, 3rd ed., 1969.
Claude Lévi-Strauss, Anthropologie structurale. Paris, 1958.
John Locke, An Essay Concerning Human Understanding, New York: Dover Publication 1959 (vol. I-II).
Bernard Lonergan, Insight: A Study on Human Understanding, New York-London: Philosophical Library-Longmans 1958.
Alasdair MacIntyre, Dependent Rational Animals. 1999.
Gabriel Marcel, Homo Viator: Introduction to a Metaphysics of Hope, London: Harper&Row, 1962.
Gabriel Marcel, Problematic Man, New York: Herder and Herder 1967.
Maurice Merleau-Ponty, La Phenomenologie de la Perception
Herbert Marcuse, One Dimensional Man, Boston: Beacon Press 1966.
Jacques Maritain, Existence and Existent: An Essay on Christian Existentialism, Garden City: Image Books 1957.
Gerhard Medicus, Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB 2015.
Maurice Nédoncelle, Love and the Person, New York: Sheed & Ward 1966.
Josef Pieper, Happiness and Contemplation. New York: Pantheon, 1958.
Josef Pieper, Josef Pieper: An Anthology. San Francisco: Ignatius Press, 1989.
Josef Pieper, Death and Immortality. New York: Herder & Herder, 1969.
Josef Pieper, "Faith, Hope, Love". Ignatius Press; New edition, 1997.
Josef Pieper, The Four Cardinal Virtues: Prudence, Justice, Fortitude, Temperance. Notre Dame, Ind., 1966.
Leonardo Polo, Antropología Trascendental: la persona humana. 1999.
Leonardo Polo, Antropología Trascendental: la esencia de la persona humana. 2003.
Karl Rahner, Spirit in the World, New York: Herder and Herder, 1968.
Karl Rahner, Hearer of the Word
Karl Rahner, Hominisation: The Evolutionary Origin of Man as a Theological Problem, New York: Herder and Herder 1965.
Marc Rölli, Anthropologie dekolonisieren, Frankfurt, New York: Campus 2021.
Paul Ricoeur, Soi-meme comme un autre
Paul Ricoeur, Fallible Man: Philosophy of Will, Chicago: Henry Regnery Company 1967.
Paul Ricoeur, Freedom and Nature: The Voluntary and Involuntary, Evanston: Northwestern University Press 1966.
Jean-Paul Sartre, Being and Nothingness: An Essay in Phenomenological Ontology, New York: The Citadel Press 1956.
Jean-Paul Sartre, Existentialism and Humanism, New York: Haskell House Publisher 1948.
Jean-Paul Sartre, Nausea, New York: New Directions 1959.
Martti Olavi Siirala, Medicine in Metamorphosis Routledge 2003.
Baruch Spinoza, Ethics, Indianapolis: Hackett 1998.
Eric Voegelin, Anamnesis.
Karol Wojtyla, The Acting Person, Dordrecht-Boston: Reidel Publishing Company 1979.
Karol Wojtyla, Love and Responsibility'', London-Glasgow: Collins, 1981.
Philosophical anthropology |
29987485 | https://en.wikipedia.org/wiki/List%20of%20material%20published%20by%20WikiLeaks | List of material published by WikiLeaks | Since 2006, the document archive website WikiLeaks has published anonymous submissions of documents that are typically unavailable to the general public.
2006–2008
Apparent Somali assassination order
WikiLeaks posted its first document in December 2006, a decision to assassinate government officials, signed by Sheikh Hassan Dahir Aweys. The New Yorker has reported that
Daniel arap Moi family corruption
On 31 August 2007, The Guardian featured on its front page a story about corruption by the family of the former Kenyan leader Daniel arap Moi. The newspaper stated that the source of the information was WikiLeaks.
Northern Rock Bank
In 2007, the bank Northern Rock suffered a crisis and was propped up by an emergency loan from the Bank of England. During the crisis, a judge banned the media from publishing a sales prospectus which Northern Rock had issued. WikiLeaks hosted a copy of the prospectus and letters from the law firm Schillings warning against the publication of the prospectus.
Bank Julius Baer lawsuit
In February 2008, the wikileaks.org domain name was taken offline after the Swiss Bank Julius Baer sued WikiLeaks and the wikileaks.org domain registrar, Dynadot, in a court in California, United States, and obtained a permanent injunction ordering the shutdown. WikiLeaks had hosted allegations of illegal activities at the bank's Cayman Islands branch. WikiLeaks' U.S. Registrar, Dynadot, complied with the order by removing its DNS entries. However, the website remained accessible via its numeric IP address, and online activists immediately mirrored WikiLeaks at dozens of alternative websites worldwide.
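The practical point here, that deleting a domain's DNS records does not take the underlying web server offline, can be illustrated with a short sketch. This is a minimal, hypothetical example in Python: the hostname and IP address below are placeholders (not the actual addresses involved in the case), and it simply shows why visitors who knew the numeric address could still reach the site after the name stopped resolving.
# Minimal sketch (hypothetical values): why removing DNS records alone does not
# make a web server unreachable. Requires the third-party "requests" library.
import socket
import requests

HOSTNAME = "censored-example.org"   # placeholder for a domain whose DNS entries were removed
KNOWN_IP = "203.0.113.10"           # placeholder server address (IPv4 documentation range)

# 1. Ordinary name resolution fails once the registrar removes the DNS records.
try:
    socket.gethostbyname(HOSTNAME)
    print("Name still resolves (records not yet removed)")
except socket.gaierror:
    print("DNS lookup failed: the name no longer resolves")

# 2. The server itself keeps running, so a visitor who knows the numeric address
#    can connect to it directly, supplying the original hostname in the Host header
#    so the web server serves the intended site.
try:
    resp = requests.get(f"http://{KNOWN_IP}/", headers={"Host": HOSTNAME}, timeout=10)
    print("Reached the server by IP address, HTTP status:", resp.status_code)
except requests.RequestException as exc:
    print("Direct connection failed (placeholder IP is not a real server):", exc)
Mirror sites rely on the same separation of names from servers: the same content is served again under other hostnames and addresses that the original injunction did not cover.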
The American Civil Liberties Union and the Electronic Frontier Foundation filed a motion protesting the censorship of WikiLeaks. The Reporters Committee for Freedom of the Press assembled a coalition of media and press that filed an amicus curiae brief on WikiLeaks' behalf. The coalition included major U.S. newspaper publishers and press organisations, such as the American Society of News Editors, the Associated Press, the Citizen Media Law Project, the E. W. Scripps Company, the Gannett Company, the Hearst Corporation, the Los Angeles Times, the National Newspaper Publishers Association, the Newspaper Association of America and the Society of Professional Journalists. The coalition requested to be heard as a friend of the court to call attention to relevant points of law that it believed the court had overlooked (on the grounds that WikiLeaks had not appeared in court to defend itself, and that no First Amendment issues had yet been raised before the court). Amongst other things, the coalition argued that:
WikiLeaks provides a forum for dissidents and whistleblowers across the globe to post documents, but the Dynadot injunction imposes a prior restraint that drastically curtails access to Wikileaks from the Internet based on a limited number of postings challenged by Plaintiffs. The Dynadot injunction therefore violates the bedrock principle that an injunction cannot enjoin all communication by a publisher or other speaker.
The same judge, Jeffrey White, who had issued the injunction, vacated it on 29 February 2008, citing First Amendment concerns and questions about legal jurisdiction. WikiLeaks was thus able to bring its site online again. The bank dropped the case on 5 March 2008. The judge also denied the bank's request for an order prohibiting the website's publication.
The executive director of the Reporters Committee for Freedom of the Press, Lucy Dalglish, commented:
It's not very often a federal judge does a 180 degree turn in a case and dissolves an order. But we're very pleased the judge recognized the constitutional implications in this prior restraint.
Guantanamo Bay procedures
A copy of the Standard Operating Procedures for Camp Delta, the U.S. Army's protocol at the Guantanamo Bay detention camp, dated March 2003, was released on the WikiLeaks website on 7 November 2007. The document, named "gitmo-sop.pdf", is also mirrored at The Guardian. Its release revealed some of the restrictions placed on detainees at the camp, including the designation of some prisoners as off-limits to the International Committee of the Red Cross, something the U.S. military had repeatedly denied in the past. It also showed that military dogs were used to intimidate prisoners, that children as young as 15 were held at Guantanamo, and that new prisoners were held in isolation for two weeks to make them more pliable. The manual also included procedures for transferring prisoners and methods of evading protocols of the Geneva Conventions.
On 3 December 2007, WikiLeaks released a copy of the 2004 edition of the manual, together with a detailed analysis of the changes.
Tibetan dissent in China
On 24 March 2008, WikiLeaks made 35 uncensored videos of civil unrest in Tibet available for viewing, to get around official Chinese censorship during the worst of the unrest.
Scientology
On 24 March 2008, WikiLeaks published what it referred to as "the collected secret 'bibles' of Scientology". On 7 April 2008, it reported receiving a letter (dated 27 March) from the Religious Technology Center claiming ownership of several documents pertaining to OT Levels within the Church of Scientology. These same documents were at the center of a 1994 scandal.
The letter went on to request the release of the logs identifying the uploader, which would have removed their anonymity. WikiLeaks responded with a statement released on Wikinews stating: "in response to the attempted suppression, WikiLeaks will release several thousand additional pages of Scientology material next week", and did so.
Sarah Palin's Yahoo! email account contents
In September 2008, during the 2008 United States presidential election campaign, the contents of a Yahoo! account belonging to Sarah Palin (the running mate of Republican presidential nominee John McCain) were posted on WikiLeaks after being hacked by members of Anonymous. Wired alleged that the contents of the mailbox indicated that she used the private Yahoo! account to send work-related messages, in violation of public record laws. The hacking of the account was widely reported in mainstream news outlets. Although WikiLeaks was able to conceal the hacker's identity, the source of the Palin emails was eventually publicly identified as David Kernell, a 20-year-old economics student at the University of Tennessee and the son of Democratic Tennessee State Representative Mike Kernell of Memphis; Kernell's email address, as listed on various social networking sites, was linked to the hacker's identity on Anonymous. Kernell attempted to conceal his identity by using the anonymous proxy service ctunnel.com but, because of the illegal nature of the access, ctunnel website administrator Gabriel Ramuglia assisted the FBI in tracking down the source of the hack.
Killings by the Kenyan police
WikiLeaks publicised reports on extrajudicial executions by Kenyan police for one week starting 1 November 2008 on its home page. Two of the human rights investigators involved, Oscar Kamau Kingara and John Paul Oulu, who made major contributions to a Kenya National Commission on Human Rights (KNCHR) report that was redistributed by WikiLeaks, The Cry of Blood – Report on Extra-Judicial Killings and Disappearances, were assassinated several months later, on 5 March 2009. WikiLeaks called for information on the assassination. In 2009, Amnesty International UK gave WikiLeaks and Julian Assange an award for the distribution of the KNCHR's The Cry of Blood report.
BNP membership list
After briefly appearing on a blog, the membership list of the far-right British National Party was posted to WikiLeaks on 18 November 2008. The name, address, age and occupation of many of the 13,500 members were given, including several police officers, two solicitors, four ministers of religion, at least one doctor, and a number of primary and secondary school teachers. In Britain, police officers are banned from joining or promoting the BNP, and at least one officer was dismissed for being a member. The BNP was known for going to considerable lengths to conceal the identities of members. On 19 November, BNP leader Nick Griffin stated that he knew the identity of the person who initially leaked the list on 17 November, describing him as a "hardliner" senior employee who left the party in 2007. On 20 October 2009, a list of BNP members from April 2009 was leaked. This list contained 11,811 members.
2009
Congressional Research Service reports
On 7 February 2009, WikiLeaks released 6,780 Congressional Research Service reports.
Contributors to Coleman campaign
In March 2009, WikiLeaks published a list of contributors to the Norm Coleman senatorial campaign.
Climategate emails
In November 2009, controversial documents, including e-mail correspondence between climate scientists, were released (allegedly after being illegally obtained) from the University of East Anglia's (UEA) Climatic Research Unit (CRU). According to the university, the emails and documents were obtained through a server hacking; one prominent host of the full 120 MB archive was WikiLeaks.
Barclays Bank tax avoidance
In March 2009, documents concerning complex arrangements made by Barclays Bank to avoid tax appeared on WikiLeaks. The documents had previously been ordered removed from the website of The Guardian. In an editorial on the issue, The Guardian pointed out that, because of the mismatch of resources, tax collectors (HMRC) now have to rely on websites such as WikiLeaks to obtain such documents.
Internet censorship lists
WikiLeaks has published the lists of forbidden or illegal web addresses for several countries.
On 19 March 2009, WikiLeaks published what was alleged to be the Australian Communications and Media Authority's blacklist of sites to be banned under Australia's proposed laws on Internet censorship. Reactions to the publication of the list by the Australian media and politicians were varied. Particular note was made by journalistic outlets of the type of websites on the list; while the Internet censorship scheme submitted by the Australian Labor Party in 2008 was proposed with the stated intention of preventing access to child pornography and sites related to terrorism, the list leaked on WikiLeaks contains a number of sites unrelated to sex crimes involving minors. When questioned about the leak, Stephen Conroy, the Minister for Broadband, Communications and the Digital Economy in Australia's Rudd Labor Government, responded by claiming that the list was not the actual list, yet threatening to prosecute anyone involved in distributing it. On 20 March 2009, WikiLeaks published an updated list, dated 18 March 2009; it more closely matches the claimed size of the ACMA blacklist, and contains two pages that have been independently confirmed as blacklisted by ACMA.
WikiLeaks also contains details of Internet censorship in Thailand, including lists of censored sites dating back to May 2006.
WikiLeaks also published a list of websites blacklisted by Denmark.
Bilderberg Group meeting reports
Since May 2009, WikiLeaks has made available reports of several meetings of the Bilderberg Group. The material includes the group's history and meeting reports from 1955, 1956, 1957, 1958, 1960, 1962, 1963 and 1980.
2008 Peru oil scandal
On 28 January 2009, WikiLeaks released 86 telephone intercept recordings of Peruvian politicians and businessmen involved in the "Petrogate" oil scandal. The release of the tapes featured on the front pages of five Peruvian newspapers.
Nuclear accident in Iran
On 16 July 2009, Iranian news agencies reported that the head of Iran's atomic energy organization, Gholam Reza Aghazadeh, had abruptly resigned for unknown reasons after twelve years in office. Shortly afterwards, WikiLeaks released a report disclosing a "serious nuclear accident" at Iran's Natanz nuclear facility earlier in 2009. Statistics released by the Federation of American Scientists (FAS) showed that the number of operational enrichment centrifuges in Iran declined from about 4,700 to about 3,900 beginning around the time the incident WikiLeaks described would have occurred.
According to media reports the accident may have been the direct result of a cyberattack at Iran's nuclear program, carried out with the Stuxnet computer worm.
Toxic dumping in Africa: The Minton report
In September 2006, commodities giant Trafigura commissioned an internal report about a toxic dumping incident in the Ivory Coast, which (according to the United Nations) affected 108,000 people. The document, called the Minton Report, names various harmful chemicals "likely to be present" in the waste and notes that some of them "may cause harm at some distance". The report states that potential health effects include "burns to the skin, eyes and lungs, vomiting, diarrhea, loss of consciousness and death", and suggests that the high number of reported casualties is "consistent with there having been a significant release of hydrogen sulphide gas".
On 11 September 2009, Trafigura's lawyers, Carter-Ruck, obtained a secret "super-injunction" against The Guardian, banning that newspaper from publishing the contents of the document. Trafigura also threatened a number of other media organizations with legal action if they published the report's contents, including the Norwegian Broadcasting Corporation and The Chemical Engineer magazine. On 14 September 2009, WikiLeaks posted the report.
On 12 October, Carter-Ruck warned The Guardian against mentioning the content of a parliamentary question that was due to be asked about the report. Instead, the paper published an article stating that they were unable to report on an unspecified question and claiming that the situation appeared to "call into question privileges guaranteeing free speech established under the 1689 Bill of Rights". The suppressed details rapidly circulated via the internet and Twitter and, amid uproar, Carter-Ruck agreed the next day to the modification of the injunction before it was challenged in court, permitting The Guardian to reveal the existence of the question and the injunction. The injunction was lifted on 16 October.
Kaupthing Bank
WikiLeaks made available an internal document from Kaupthing Bank from just prior to the collapse of Iceland's banking sector, which led to the 2008–2012 Icelandic financial crisis. The document shows that suspiciously large sums of money were loaned to various owners of the bank, and large debts written off. Kaupthing's lawyers have threatened WikiLeaks with legal action, citing banking privacy laws. The leak has caused an uproar in Iceland. Criminal charges relating to the multibillion-euro loans to Exista and other major shareholders are being investigated. The bank is seeking to recover loans taken out by former bank employees before its collapse.
Joint Services Protocol 440
In October 2009, Joint Services Protocol 440, a 2,400-page restricted document written in 2001 by the British Ministry of Defence was leaked. It contained instructions for the security services on how to avoid leaks of information by hackers, journalists, and foreign spies.
9/11 pager messages
On 25 November 2009, WikiLeaks released 570,000 intercepts of pager messages sent on the day of the September 11 attacks. Chelsea Manning (see below) said that the messages came from an NSA database. Among the released messages are communications between Pentagon officials and the New York City Police Department.
2010
U.S. Intelligence report on WikiLeaks
On 15 March 2010, WikiLeaks released a secret 32-page U.S. Department of Defense Counterintelligence Analysis Report from March 2008. The document described some prominent reports leaked on the website that related to U.S. security interests, and discussed potential methods of marginalizing the organization. WikiLeaks editor Julian Assange said that some details in the Army report were inaccurate and its recommendations flawed, and that the concerns it raised were hypothetical.
The report discussed deterring potential whistleblowers through termination of employment and criminal prosecution of any existing or former insiders, leakers or whistleblowers. Reasons given for the report included notable leaks such as U.S. equipment expenditure, human rights violations at Guantanamo Bay, and the battle over the Iraqi town of Fallujah.
Baghdad airstrike video
On 5 April 2010, WikiLeaks released classified U.S. military footage from a series of attacks on 12 July 2007 in Baghdad by a U.S. helicopter that killed 12–18 people, including two Reuters news staff, Saeed Chmagh and Namir Noor-Eldeen, on a website called "Collateral Murder". The attack also wounded others, including two children who were in a van that was fired on when it came to collect the wounded men. The footage consisted of a 39-minute unedited version and an 18-minute version that had been edited and annotated. According to some media reports, the Reuters news staff were in the company of armed men, and the pilots may have mistaken the camera equipment Chmagh and Noor-Eldeen were carrying for weapons. The footage includes audio from the American pilots during the shooting. After wounding the two children, one pilot says, "Well, it's their fault for bringing their kids into a battle".
The military conducted an investigation into the incident and found there were two rocket propelled grenade launchers and one AK-47 among the dead.
In the week following the release, "Wikileaks" was the fastest-growing search term worldwide, as measured by Google Insights.
Chelsea Manning
A 22-year-old US Army intelligence analyst, PFC (formerly SPC) Chelsea Manning (formerly Bradley Manning), was arrested after alleged chat logs were turned in to the authorities by former hacker Adrian Lamo, in whom she had confided. Manning reportedly told Lamo she had leaked the Baghdad airstrike video, a video of the Granai airstrike, and around 260,000 diplomatic cables to WikiLeaks. WikiLeaks said that "allegations in Wired that we have been sent 260,000 classified US embassy cables are, as far as we can tell, incorrect." WikiLeaks also said it was unable to confirm whether or not Manning was actually the source of the video, stating "we never collect personal information on our sources", but that it had nonetheless "taken steps to arrange for (Manning's) protection and legal defence." On 21 June, Julian Assange told The Guardian that WikiLeaks had hired three US criminal lawyers to defend Manning but that they had not been given access to her.
On 28 February 2013, Manning confessed in open court to providing vast archives of military and diplomatic files to WikiLeaks. She pleaded guilty to 10 criminal counts in connection with the huge amount of material she leaked, which included videos of airstrikes in Iraq and Afghanistan in which civilians were killed, logs of military incident reports, assessment files of detainees held at Guantánamo Bay, Cuba, and a quarter-million cables from American diplomats stationed around the world. She read a statement recounting how she joined the military, became an intelligence analyst in Iraq, decided that certain files should become known to the American public to prompt a wider debate about foreign policy, downloaded them from a secure computer network and then ultimately uploaded them to WikiLeaks.
Manning reportedly wrote, "Everywhere there's a U.S. post, there's a diplomatic scandal that will be revealed." According to The Washington Post, she also described the cables as "explaining how the first world exploits the third, in detail, from an internal perspective."
Afghan War Diary
On 25 July 2010, WikiLeaks released to The Guardian, The New York Times, and Der Spiegel over 92,000 documents related to the war in Afghanistan between 2004 and the end of 2009. The documents detail individual incidents including friendly fire and civilian casualties. The scale of the leak was described by Julian Assange as comparable to that of the Pentagon Papers in the 1970s. The documents were released to the public on 25 July 2010. On 29 July 2010 WikiLeaks added a 1.4 GB "insurance file" to the Afghan War Diary page, whose decryption details would be released if WikiLeaks or Assange were harmed.
About 15,000 of the 92,000 documents have not yet been released on WikiLeaks, as the group is currently reviewing the documents to remove some of the sources of the information. Speaking to a group in London in August 2010, Assange said that the group will "absolutely" release the remaining documents. He stated that WikiLeaks has requested help from the Pentagon and human-rights groups to help redact the names, but has not received any assistance. He also stated that WikiLeaks is "not obligated to protect other people's sources...unless it is from unjust retribution."
According to a report on the Daily Beast website, the Obama administration has asked Britain, Germany and Australia among others to consider bringing criminal charges against Assange for the Afghan war leaks and to help limit Assange's travels across international borders. In the United States, a joint investigation by the Army and the Federal Bureau of Investigation may try to prosecute "Mr. Assange and others involved on grounds they encouraged the theft of government property".
The Australia Defence Association (ADA) stated that WikiLeaks' Julian Assange "could have committed a serious criminal offence in helping an enemy of the Australian Defence Force (ADF)." Neil James, the executive director of ADA, states: "Put bluntly, Wikileaks is not authorised in international or Australian law, nor equipped morally or operationally, to judge whether open publication of such material risks the safety, security, morale and legitimate objectives of Australian and allied troops fighting in a UN-endorsed military operation."
WikiLeaks' leaking of classified U.S. intelligence was described by a commentator in The Wall Street Journal as having "endangered the lives of Afghan informants" and of "the dozens of Afghan civilians named in the document dump as U.S. military informants. Their lives, as well as those of their entire families, are now at terrible risk of Taliban reprisal." When interviewed, Assange stated that WikiLeaks had withheld some 15,000 documents that identify informants to avoid putting their lives at risk. Voice of America reported in August 2010 that Assange, responding to such criticism, said that the 15,000 withheld documents were being reviewed "line by line" and that the names of "innocent parties who are under reasonable threat" would be removed. Greg Gutfeld of Fox News described the leaking as "WikiLeaks' Crusade Against the U.S. Military." John Pilger reported that, prior to the release of the Afghan War Diary in July, WikiLeaks contacted the White House in writing, asking that it identify names that might draw reprisals, but received no response.
According to the New York Times, Amnesty International and Reporters Without Borders criticized WikiLeaks for what they saw as risking people's lives by identifying Afghans acting as informers. A Taliban spokesman said that the Taliban had formed a nine-member "commission" to review the documents "to find about people who are spying." He said the Taliban had a "wanted" list of 1,800 Afghans and was comparing that with names WikiLeaks provided, stating "after the process is completed, our Taliban court will decide about such people."
Love Parade documents
Following the Love Parade stampede in Duisburg, Germany on 24 July 2010, the local news blog Xtranews published internal documents of the city administration regarding Love Parade planning and actions by the authorities. The city government reacted by acquiring a court order on 16 August forcing Xtranews to remove the documents from its blog. Two days later, however, after the documents had surfaced on other websites as well, the government stated that it would not conduct any further legal actions against the publication of the documents. On 20 August WikiLeaks released a publication titled Loveparade 2010 Duisburg planning documents, 2007–2010, which comprised 43 internal documents regarding the Love Parade 2010.
Iraq War logs
In October 2010, it was reported that WikiLeaks was planning to release up to 400,000 documents relating to the Iraq War. Julian Assange initially denied the reports, stating: "WikiLeaks does not speak about upcoming releases dates, indeed, with very rare exceptions we do not communicate any specific information about upcoming releases, since that simply provides fodder for abusive organizations to get their spin machines ready." The Guardian reported on 21 October 2010 that it had received almost 400,000 Iraq war documents from WikiLeaks. On 22 October 2010, Al Jazeera was the first to release analyses of the leak, dubbed The War Logs. WikiLeaks posted a tweet that "Al Jazeera have broken our embargo by 30 minutes. We release everyone from their Iraq War Logs embargoes." This prompted other news organizations to release their articles based on the source material. The release of the documents coincided with a return of the main wikileaks.org website, which had been offering no content since 30 September 2010.
The BBC quoted the Pentagon as referring to the Iraq War Logs as "the largest leak of classified documents in its history." Media coverage of the leaked documents focused on claims that the U.S. government had ignored reports of torture by the Iraqi authorities after the 2003 war.
State Department diplomatic cables release
On 22 November 2010, an announcement was made via the WikiLeaks Twitter feed that the next release would be "7x the size of the Iraq War Logs." U.S. authorities and the media speculated that it would contain diplomatic cables. Prior to the expected leak, the government of the United Kingdom (UK) sent a DA-Notice to UK newspapers requesting advance notice of the expected publication. According to Index on Censorship, "there is no obligation on media to comply"; "Newspaper editors would speak to [the] Defence, Press and Broadcasting Advisory Committee prior to publication." The Pakistani newspaper Dawn stated that the U.S. newspapers The New York Times and The Washington Post were expected to publish parts of the diplomatic cables on Sunday 28 November, including 94 Pakistan-related documents.
On 26 November, via his lawyer Jennifer Robinson, Assange sent a letter to the US Department of State, asking for information regarding people who could be placed at "significant risk of harm" by the diplomatic cables release. Harold Koh, Legal Adviser of the Department of State, refused the proposal, stating, "We will not engage in a negotiation regarding the further release or dissemination of illegally obtained U.S. Government classified materials."
On 28 November, WikiLeaks announced it was undergoing a massive distributed denial-of-service attack, but vowed to still leak the cables and documents via prominent media outlets including El País, Le Monde, Der Spiegel, The Guardian, and The New York Times. The announcement was shortly thereafter followed by the online publication, by The Guardian, of some of the purported diplomatic cables, including one in which United States Secretary of State Hillary Clinton apparently orders diplomats to obtain credit card and frequent flier numbers of the French, British, Russian and Chinese delegations to the United Nations Security Council. Other revelations reportedly include that several Arab nations urged the U.S. to launch a first strike on Iran, that the Chinese government was directly involved in computer hacking, and that the U.S. is pressuring Pakistan to turn over nuclear material to prevent it from falling into the wrong hands. The cables also include unflattering appraisals of world leaders.
In December 2010, Der Spiegel reported that one of the cables showed that the US had placed pressure on Germany not to pursue the 13 suspected CIA agents involved in the 2003 abduction of Khalid El-Masri, a German citizen. The abduction was probably carried out through "extraordinary rendition". German prosecutors in Munich had issued arrest warrants for the 13 suspected CIA operatives involved in the abduction. The cables released by WikiLeaks showed that, after contact from the then-Deputy US Ambassador John M. Koenig and other US diplomats, the Munich public prosecutor's office, Germany's Justice Ministry and the Foreign Ministry all cooperated with the US, and the agents were never extradited to Germany.
Despite steps taken by the United States Government forbidding all unauthorized federal government employees and contractors from accessing classified documents publicly available on WikiLeaks, "Wikileaks" remained the top search term in the United States in the week following the release (28 November – 5 December 2010), as measured by Google Insights.
U.S. Secretary of State Hillary Clinton responded to the leaks saying, "This disclosure is not just an attack on America's foreign policy; it is an attack on the international community, the alliances and partnerships, the conventions and negotiations that safeguard global security and advance economic prosperity." Julian Assange is quoted as saying, "Of course, abusive, Titanic organizations, when exposed, grasp at all sorts of ridiculous straws to try and distract the public from the true nature of the abuse." John Perry Barlow, co-founder of the Electronic Frontier Foundation, wrote a tweet saying: "The first serious infowar is now engaged. The field of battle is WikiLeaks. You are the troops."
2011
Guantanamo Bay files
On 24 April 2011 WikiLeaks began a month-long release of 779 US Department of Defense documents about detainees at the Guantanamo Bay detention camp.
The Spy Files
On 1 December 2011, WikiLeaks started to release the Spy Files, a collection of documents concerning the international surveillance technology industry.
2012
The Global Intelligence Files
On 27 February 2012, WikiLeaks began to publish what it called "The Global Intelligence Files", more than 5,000,000 e-mails from Stratfor dating from July 2004 to late December 2011. It was said to show how a private intelligence agency operates and how it targets individuals for its corporate and government clients. A few days before, on 22 February, WikiLeaks had released its second insurance file via BitTorrent. The file is named "wikileaks-insurance-20120222.tar.bz2.aes" and is about 65 GB in size.
Syria Files
On 5 July 2012, WikiLeaks began publishing the Syria Files, more than two million emails from Syrian political figures, ministries and associated companies, dating from August 2006 to March 2012.
2013
PlusD
In April 2013, WikiLeaks released 1.7 million U.S. diplomatic and intelligence reports, including the Kissinger cables.
Prosecution and prison documents for Anakata
Documents relating to the prosecution and imprisonment of Gottfrid Svartholm Warg ("Anakata"), a co-founder of The Pirate Bay, were released on 19 May 2013.
Spy Files 3
On Wednesday 4 September 2013 at 16:00 UTC, WikiLeaks released "Spy Files #3", 249 documents from 92 global intelligence contractors.
Draft Trans-Pacific Partnership Agreement IP chapter
In November 2013, WikiLeaks published the draft text of the Trans-Pacific Partnership Agreement's Intellectual Property Rights chapter.
2014
Trade in Services Agreement chapter draft
WikiLeaks published a secret draft of the Financial Services Annex of the Trade in Services Agreement in June 2014. On its website, the organization provided an analysis of the leaked document. TISA, an international trade deal aimed at market liberalization, covers 50 countries and 68% of the global services industry. The agreement's negotiations have been criticized for a lack of transparency.
Australian bribery case suppression order
On 29 July 2014, WikiLeaks released a secret gagging order issued by the Supreme Court of Victoria that forbade the Australian press from covering a multimillion-dollar bribery investigation involving the nation's central bank and several international leaders. Indonesian, Vietnamese, Malaysian and Australian government officials were named in the order, which was suppressed to "prevent damage to Australia's international relations that may be caused by the publication of material that may damage the reputations of specified individuals who are not the subject of charges in these proceedings."
Public criticism of the suppression order followed the leak. Human Rights Watch general counsel Dinah PoKempner said: "Secret law is often unaccountable and inadequately justified. The government has some explaining to do as to why it sought such an extraordinary order, and the court should reconsider the need for it now that its action has come to light." At a media conference, Indonesian president Susilo Bambang Yudhoyono condemned the gagging order and called for an open and transparent investigation.
2015
TPP Investment Chapter
On 25 March 2015 WikiLeaks released the "Investment Chapter" from the secret negotiations of the TPP (Trans-Pacific Partnership) agreement.
Sony archives
On 16 April 2015, WikiLeaks published a searchable version of the Sony Archives, which had originally been obtained in November 2014 by the hacker group "Guardians of Peace". The leaked records contained 30,287 documents from Sony Pictures Entertainment (SPE) and 173,132 emails between more than 2,200 SPE email addresses. SPE is a US subsidiary of the Japanese multinational technology and media corporation Sony, which handles film and TV production and distribution operations.
The archive contained communications between SPE and more than 100 US government email addresses, which WikiLeaks said showed the corporation's direct ties to the White House and the US military-industrial complex and its opportunities to influence laws and policies.
WikiLeaks editor-in-chief Julian Assange said: "This archive shows the inner workings of an influential multinational corporation. It is newsworthy and at the centre of a geo-political conflict. It belongs in the public domain. WikiLeaks will ensure it stays there."
Trident Nuclear Weapons System
In May 2015, whistleblower Royal Navy Able Seaman William McNeilly exposed what he described as serious security issues relating to the UK's Trident nuclear weapons system.
The Saudi Cables
In June 2015, WikiLeaks began publishing confidential and secret Saudi Arabian government documents. Julian Assange said that "The Saudi Cables lift the lid on an increasingly erratic and secretive dictatorship that has not only celebrated its 100th beheading this year, but which has also become a menace to its neighbours and itself".
Cables from early 2013 indicate that the British government under David Cameron may have traded votes with Saudi Arabia to support each other's election to the United Nations Human Rights Council (UNHRC) for the period 2014–2016. Both Britain and Saudi Arabia joined the UNHRC in the election held in 2013. UN Watch expressed concern at the report saying that UNHRC must be chosen based on upholding the highest standards of human rights.
2016
DNC email leak
On 22 July 2016, WikiLeaks released nearly 20,000 e-mails and over 8,000 attachments from the Democratic National Committee (DNC), the governing body of the U.S. Democratic Party. The leak included emails from seven key DNC staff members dating from January 2015 to May 2016, and allegedly revealed bias among key DNC staffers against the presidential campaign of Senator Bernie Sanders in favor of Hillary Clinton's campaign. WikiLeaks did not reveal its source.
Podesta emails
On 7 October 2016, WikiLeaks started releasing emails from John Podesta, the chairman of Hillary Clinton's 2016 presidential campaign. The emails provided some insight into the inner workings of Clinton's campaign. One of the emails contained 25 excerpts from Clinton's paid Wall Street speeches; another leaked document included eighty pages of those speeches. Also among these emails was one from Donna Brazile to Podesta suggesting that Brazile had received a town hall debate question in advance and was sharing it with Clinton. One of the emails released on 12 October 2016 included Podesta's iCloud account password; his iCloud account was reportedly hacked, and his Twitter account was briefly compromised. The releases also included emails that Barack Obama and Podesta exchanged in 2008.
The Clinton campaign has declined to authenticate these leaks. Glen Caplin, a spokesman for the Clinton campaign, said, "By dribbling these out every day WikiLeaks is proving they are nothing but a propaganda arm of the Kremlin with a political agenda doing [Vladimir] Putin's dirty work to help elect Donald Trump."
The New York Times reported that, when asked, Russian President Vladimir Putin replied that Russia was being falsely accused. Julian Assange has also denied that Russia is the source.
Yemen files
On 25 November 2016, WikiLeaks released emails and internal documents that provided details on the US military operations in Yemen from 2009 to March 2015. In a statement accompanying the release of the "Yemen Files", Assange said about the US involvement in the Yemen war: "The war in Yemen has produced 3.15 million internally displaced persons. Although the United States government has provided most of the bombs and is deeply involved in the conduct of the war itself reportage on the war in English is conspicuously rare".
PlusD
On 28 November 2016, WikiLeaks released more than 500,000 diplomatic cables sent by the United States Department of State in 1979 during the presidency of Jimmy Carter.
German BND-NSA Inquiry
On 1 December 2016, WikiLeaks released 2,420 documents which it claims are from the German Parliamentary Committee investigating the NSA spying scandal. German security officials at first suspected the documents were obtained from a 2015 cyberattack on the Bundestag, but now suspect it was an internal leak.
Turkish AK Party emails
Turkey blocked access to WikiLeaks after the website released emails from Turkey's ruling Justice and Development Party (AKP), which WikiLeaks said it had published in response to Erdoğan's post-coup purges against political dissent.
2017
CIA espionage orders
On 16 February 2017, WikiLeaks released a purported report on CIA espionage orders (marked as NOFORN) for the 2012 French presidential election. The order called for details of party funding, internal rivalries and future attitudes toward the United States. The Associated Press noted that "the orders seemed to represent standard intelligence-gathering."
Vault 7
In March 2017, WikiLeaks published more than 8,000 documents on the CIA. The confidential documents, codenamed Vault 7 and dated from 2013 to 2016, included details of the CIA's software capabilities, such as the ability to compromise cars, smart TVs, web browsers (including Google Chrome, Microsoft Edge, Firefox, and Opera), and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux. WikiLeaks did not name the source, but said that the files had "circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."
Spy Files Russia
In September 2017, WikiLeaks released "Spy Files Russia," revealing "how a Saint Petersburg-based technology company called Peter-Service helped state entities gather detailed data on Russian mobile phone users, part of a national system of online surveillance called System for Operative Investigative Activities (SORM)."
2018
ICE Patrol
On 22 June 2018, WikiLeaks published documents containing the personal details of many U.S. Immigration and Customs Enforcement (ICE) employees, with the declared aim of "understanding ICE programs and increasing accountability, especially in light of the extreme actions taken by ICE lately, such as the separation of children and parents at the US border".
Allegation of a corrupt broker in France-UAE arms deal
On 28 September 2018, WikiLeaks published information related to a dispute over a commission payment for an arms deal between the French state-owned company GIAT Industries SA (now Nexter Systems) and the United Arab Emirates (UAE). The deal, which was signed in 1993 and was due for completion in 2008, involved the sale by Nexter to the UAE of 46 armoured vehicles, 388 Leclerc combat tanks, 2 training tanks, spare parts and ammunition. The dispute was brought to the International Chamber of Commerce (ICC) by Abbas Ibrahim Yousef Al Yousef, who acted as broker between the UAE and Nexter Systems. Yousef claimed that he was paid $40 million less than the $235 million he was promised by Nexter. Nexter justified stopping the payments by saying that Yousef's company, Kenoza Consulting and Management, Inc., registered in the British Virgin Islands, had committed corrupt acts by, among other things, using German engines in its tanks, which violated laws forbidding arms sales from Germany to the Middle East. Yousef claimed he had obtained a waiver from those laws by using lobby groups to contact "decision makers at the highest levels, both in France and Germany". Yousef's claims against Nexter Systems were dismissed when it became known that his fee from the deal would have been much smaller had he been paid on retainer.
2019
Organisation for the Prohibition of Chemical Weapons
Between October and December 2019, WikiLeaks published four batches of internal documents from the Organisation for the Prohibition of Chemical Weapons related to its investigation of the alleged chemical attack in Douma in April 2018.
2021
Intolerance Network
In 2021, WikiLeaks published 17,000 documents from the right-wing groups HazteOir and CitizenGo.
Complete list
This article covers only a small subset of the leaked documents: those that have attracted significant attention in the mainstream press. WikiLeaks hosts the complete list, organised by country or by year, through 2010.
Unpublished material
In October 2009, Computerworld published an interview with Assange in which he claimed to be in possession of "5GB from Bank of America" taken from "one of the executive's hard drives." In November 2010, Forbes magazine published another interview with Assange in which he said WikiLeaks was planning another "megaleak" for early 2011, which this time would come from inside the private sector and involve "a big U.S. bank". Bank of America's stock price fell by three percent following this announcement. Assange commented on the possible impact of the release that "it could take down a bank or two." However, WikiLeaks later stated that the information was among the documents that former spokesperson Daniel Domscheit-Berg claimed to have destroyed in August 2011.
In March 2010, Daniel Domscheit-Berg, at the time WikiLeaks' spokesperson, announced on a podcast that the organization had in its possession around 37,000 internal e-mails from the far-right National Democratic Party of Germany. He stated explicitly that he was not working on this project himself because it would make him legally vulnerable as a German citizen. According to him, WikiLeaks was working on a crowdsourcing-based tool to exploit such masses of data. WikiLeaks claimed that these e-mails (which it said numbered 60,000) were among the documents that Domscheit-Berg claimed to have destroyed in August 2011.
In May 2010, WikiLeaks said it had video footage of an alleged massacre of Afghan civilians by the U.S. military, which it said it was preparing to release. However, this may have been among the videos that WikiLeaks said former spokesperson Domscheit-Berg destroyed in August 2011.
In July 2010, during an interview with Chris Anderson, Assange showed a document WikiLeaks had on an Albanian oil well blowout and said it also had material from inside BP, and that it was "getting [an] enormous quantity of whistle-blower disclosures of a very high caliber", but added that WikiLeaks had not been able to verify and release the material because it did not have enough volunteer journalists.
In a September 2010 Twitter post, WikiLeaks stated that it had a first-edition copy of Operation Dark Heart, a memoir by a U.S. Army intelligence officer. The uncensored first printing of around 9,500 copies had been purchased in its entirety and destroyed by the U.S. Department of Defense.
In October 2010, Assange told a leading Moscow newspaper that "[t]he Kremlin had better brace itself for a coming wave of WikiLeaks disclosures about Russia." In late November, Assange stated, "we have material on many businesses and governments, including in Russia. It's not right to say there's going to be a particular focus on Russia". On 23 December 2010, the Russian newspaper Novaya Gazeta announced that it had been granted access to a wide range of materials from the WikiLeaks database. The newspaper said that it would begin releasing these materials in January 2011, with an eye toward exposing corruption in the Russian government.
In December 2010, Assange's lawyer, Mark Stephens, said on The Andrew Marr Show that WikiLeaks had information that it considers to be a "thermo-nuclear device" that it would release if the organisation needs to defend itself.
In January 2011, Rudolf Elmer hand delivered two CDs to Assange during a news conference in London. Elmer claimed the CDs contain the names of around 2,000 tax-evading clients of the Swiss bank Julius Baer.
In his February 2011 memoir, Inside WikiLeaks: My Time with Julian Assange at the World's Most Dangerous Website, Daniel Domscheit-Berg acknowledged that he and another former WikiLeaks volunteer had material submitted to WikiLeaks in their possession (as well as the source code to the site's submission system) and that they would return it to the organization only once it had repaired its security and online infrastructure. However, in August 2011 Domscheit-Berg announced that he had destroyed all 3,500 documents in his possession. The German newspaper Der Spiegel reported that the documents included the U.S. government's No Fly List, and WikiLeaks likewise claimed that the destroyed data included the list; this was the first mention of WikiLeaks having had possession of the No Fly List. WikiLeaks also claimed that the destroyed data included information it had previously announced was in its possession but had not released publicly, including "five gigabytes from the Bank of America" (first reported to be in WikiLeaks' possession in October 2009), "60,000 emails from the NPD" (which Domscheit-Berg had divulged to be in WikiLeaks' possession in March 2010, when he still worked with the organization), and "videos of a major US atrocity in Afghanistan" (which perhaps include the one it claimed to have in May 2010). Additionally, WikiLeaks claimed that the destroyed documents included "the internals of around 20 neo-Nazi organizations" and "US intercept arrangements for over a hundred internet companies"; neither of these had previously been reported to be in WikiLeaks' possession.
See also
List of public disclosures of classified information
List of government surveillance projects
References